DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The claims have been examined using the effective filing date of 09/30/2021.
Information Disclosure Statement
All Information Disclosure Statements filed as of 02/19/2025 have been considered by the examiner.
Claim Objections
Claim 2 is objected to because of the following informalities:
In line 1, the phrase “wherein the rendering the” contains a typographical error.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because it is directed to a computer-readable storage medium, a term that is inclusive of transitory signals (signals per se).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4 and 10-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zou (CN 109859211 A).
[Translated Zou Figure 4]
With respect to claim 1, Zou teaches an image editing method (see Translated Zou figure 4), comprising: obtaining an original image (see Translated Zou figure 4 S201), wherein the original image comprises a first editing object (see Translated Zou figure 4 S202 and S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)) and a second editing object (see Translated Zou figure 4 S202 and S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)), and the first editing object and the second editing object are in different image regions of the original image (see Translated Zou figure 4 S202 and “S202, dividing the to-be-processed image to obtain a plurality of area images;” page 9 line 21); rendering the first editing object and the second editing object to different layers respectively, to generate a first original layer corresponding to the first editing object (see Translated Zou figure 4 S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)) and a second original layer corresponding to the second editing object (see Translated Zou figure 4 S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. 
Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)); rendering an editing result of the first editing object under a first editing operation based on the first original layer to generate a first editing layer corresponding to the first editing object in response to the first editing operation for the first editing object (Translated Zou figure 4 S204 and S205), and rendering an editing result of the second editing object under a second editing operation based on the second original layer to generate a second editing layer corresponding to the second editing object in response to the second editing operation for the second editing object (Translated Zou figure 4 S204 and S205); and generating, based on the first editing layer and the second editing layer, a target image as an editing result of the original image (Translated Zou figure 4 S206).
With respect to claim 2, Zou teaches the image editing method according to claim 1, wherein the rendering the first editing object and the second editing object to different layers respectively, to generate a first original layer corresponding to the first editing object and a second original layer corresponding to the second editing object comprises: creating layers corresponding to the first editing object and the second editing object, respectively (see Translated Zou figure 4 S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)); extracting separately image data corresponding to the first editing object and image data corresponding to the second editing object from the original image (see Translated Zou figure 4 S202 and S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)); and rendering the layer corresponding to the first editing object based on the image data corresponding to the first editing object to generate the first original layer corresponding to the first editing object (see Translated Zou figure 4 S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer, so that subsequent time the area image editing for editing the picture layer of corresponding, it is convenient and shortcut, and easy operation.” pages 9 (bottom) – page 10 (top)), and rendering the layer corresponding to the second editing object based on the image data corresponding to the second editing object to generate the second original layer corresponding to the second editing object (see Translated Zou figure 4 S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. 
Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer, so that subsequent time the area image editing for editing the picture layer of corresponding, it is convenient and shortcut, and easy operation.” pages 9 (bottom) – page 10 (top)).
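For illustration only, the following minimal Python/NumPy sketch shows one reading of the layer pipeline mapped above for claims 1-2: each segmented region is rendered to its own layer, the layers are edited independently, and the edited layers are composited into the target image. This is an illustrative reading, not code from Zou or from the application; the function names and the RGBA layer representation are assumptions.

import numpy as np

def split_into_layers(image, masks):
    # Render each masked region of the original image to its own RGBA layer
    # (one layer per editing object, as in Zou's S203).
    layers = []
    for mask in masks:  # mask: boolean HxW array for one region
        layer = np.zeros((*image.shape[:2], 4), dtype=np.uint8)
        layer[..., :3][mask] = image[mask]  # copy the region's image data
        layer[..., 3][mask] = 255           # opaque inside the region only
        layers.append(layer)
    return layers

def composite(base, edited_layers):
    # Overlay the edited layers on the original image to form the target
    # image (as in Zou's S206); opaque layer pixels overwrite the base.
    target = base.copy()
    for layer in edited_layers:
        opaque = layer[..., 3] > 0
        target[opaque] = layer[..., :3][opaque]
    return target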
With respect to claim 3, Zou teaches the image editing method according to claim 2, wherein the extracting separately image data corresponding to the first editing object and image data corresponding to the second editing object from the original image comprises: determining an outline of the first editing object (“S202, dividing the to-be-processed image to obtain a plurality of area images;
may be, analysis of the content region in the image to be processed; extracting the outline edge of each content region; dividing the to-be-processed image according to the outline edge of the content region, obtain the region image corresponding to the each content region.” Page 9 lines 21-25) and an outline of the second editing object from the original image (“S202, dividing the to-be-processed image to obtain a plurality of area images; may be, analysis of the content region in the image to be processed; extracting the outline edge of each content region; dividing the to-be-processed image according to the outline edge of the content region, obtain the region image corresponding to the each content region.” Page 9 lines 21-25); and extracting image data within the outline of the first editing object to obtain the image data corresponding to the first editing object (“S202, dividing the to-be-processed image to obtain a plurality of area images; may be, analysis of the content region in the image to be processed; extracting the outline edge of each content region; dividing the to-be-processed image according to the outline edge of the content region, obtain the region image corresponding to the each content region.” Page 9 lines 21-25), and extracting image data within the outline of the second editing object to obtain the image data corresponding to the second editing object (“S202, dividing the to-be-processed image to obtain a plurality of area images; may be, analysis of the content region in the image to be processed; extracting the outline edge of each content region; dividing the to-be-processed image according to the outline edge of the content region, obtain the region image corresponding to the each content region.” Page 9 lines 21-25).
With respect to claim 4, Zou teaches the image editing method according to claim 2, wherein the extracting separately image data corresponding to the first editing object and image data corresponding to the second editing object from the original image comprises: determining an outline of the first editing object and an outline of the second editing object from the original image (“S202, dividing the to-be-processed image to obtain a plurality of area images; may be, analysis of the content region in the image to be processed; extracting the outline edge of each content region; dividing the to-be-processed image according to the outline edge of the content region, obtain the region image corresponding to the each content region.” Page 9 lines 21-25); determining a first image data extraction region corresponding to the first editing object and a second image data extraction region corresponding to the second editing object based on the outline of the first editing object and the outline of the second editing object, respectively (“S202, dividing the to-be-processed image to obtain a plurality of area images; may be, analysis of the content region in the image to be processed; extracting the outline edge of each content region; dividing the to-be-processed image according to the outline edge of the content region, obtain the region image corresponding to the each content region.” Page 9 lines 21-25), wherein the outline of the first editing object is in the first image data extraction region (“S202, dividing the to-be-processed image to obtain a plurality of area images; may be, analysis of the content region in the image to be processed; extracting the outline edge of each content region; dividing the to-be-processed image according to the outline edge of the content region, obtain the region image corresponding to the each content region.” Page 9 lines 21-25), and the outline of the second editing object is in the second image data extraction region (“S202, dividing the to-be-processed image to obtain a plurality of area images; may be, analysis of the content region in the image to be processed; extracting the outline edge of each content region; dividing the to-be-processed image according to the outline edge of the content region, obtain the region image corresponding to the each content region.” Page 9 lines 21-25); and extracting image data within the first image data extraction region to obtain the image data corresponding to the first editing object (“S202, dividing the to-be-processed image to obtain a plurality of area images; may be, analysis of the content region in the image to be processed; extracting the outline edge of each content region; dividing the to-be-processed image according to the outline edge of the content region, obtain the region image corresponding to the each content region.” Page 9 lines 21-25), and extracting image data within the second image data extraction region to obtain the image data corresponding to the second editing object (“S202, dividing the to-be-processed image to obtain a plurality of area images; may be, analysis of the content region in the image to be processed; extracting the outline edge of each content region; dividing the to-be-processed image according to the outline edge of the content region, obtain the region image corresponding to the each content region.” Page 9 lines 21-25).
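As a further illustration of the outline-based extraction mapped above for claims 3-4, the sketch below uses OpenCV contour detection: cv2.findContours yields each content region’s outline, and a bounding rectangle containing the outline serves as the image data extraction region. The Otsu thresholding and the bounding-rectangle choice are illustrative assumptions, not details taken from Zou.

import cv2
import numpy as np

def extract_region_images(image_bgr):
    # Determine each content region's outline, then extract the image data
    # inside an extraction region that contains that outline.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    outlines, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for outline in outlines:
        x, y, w, h = cv2.boundingRect(outline)  # extraction region around the outline
        regions.append(image_bgr[y:y + h, x:x + w].copy())
    return regions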
[Translated Zou Figure 1]
With respect to claim 10, Zou teaches an image editing apparatus (“…the invention provides a mobile terminal, said mobile terminal comprising a memory and a processor, the memory storing an image processing computer program, the processor executing the computer program to realize the image processing method according to any method.” Page 3 lines 21-23 and Translated Zou Figure 1), comprising: an obtaining unit, configured to obtain an original image (“A/V input unit 104 configured to receive an audio or video signal. A/V input unit 104 may include a graphics processor (Graphics Processing Unit (GPU) 1041 and microphone 1042, the graphics processor 1041 to the video capture mode or the image capturing mode in the image capturing image data of static image or video obtained by the device (such as a camera) for processing” page 5 paragraph 5 lines 1-4 and Translated Zou Figure 1 104), wherein the original image comprises a first editing object and a second editing object (see Translated Zou figure 4 S202 and S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)), and the first editing object and the second editing object are in different image regions of the original image (see Translated Zou figure 4 S202 and “S202, dividing the to-be-processed image to obtain a plurality of area images;” page 9 line 21); a generation unit (see Translated Zou figure 1 106 and 110 and “processed image frame can be displayed on the display unit 106.” page 5 paragraph 5 lines 4-5), configured to render the first editing object and the second editing object to different layers, respectively (see Translated Zou figure 4 S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer, so that subsequent time the area image editing for editing the picture layer of corresponding, it is convenient and shortcut, and easy operation.” pages 9 (bottom) – page 10 (top) and Translated Zou Figure 1 106), to generate a first original layer corresponding to the first editing object and a second original layer corresponding to the second editing object (see Translated Zou figure 4 S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers.
Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer, so that subsequent time the area image editing for editing the picture layer of corresponding, it is convenient and shortcut, and easy operation.” pages 9 (bottom) – page 10 (top) and Translated Zou Figure 1 106); an editing unit (“processed image frame can be displayed on the display unit 106.” page 5 paragraph 5 lines 4-5), configured to render an editing result of the first editing object under a first editing operation based on the first original layer to generate a first editing layer corresponding to the first editing object, in response to the first editing operation for the first editing object (“processed image frame can be displayed on the display unit 106.” page 5 paragraph 5 lines 4-5 and Translated Zou figure 4 S204 and S205 and Translated Zou figure 1 106), and render an editing result of the second editing object under a second editing operation based on the second original layer to generate a second editing layer corresponding to the second editing object in response to the second editing operation for the second editing object (“processed image frame can be displayed on the display unit 106.” page 5 paragraph 5 lines 4-5 and Translated Zou figure 4 S204 and S205 and Translated Zou figure 1 106); and a processing unit, configured to generate, based on the first editing layer and the second editing layer, a target image as an editing result of the original image (“…the invention provides a mobile terminal, said mobile terminal comprising a memory and a processor, the memory storing an image processing computer program, the processor executing the computer program to realize the image processing method according to any method.” Page 3 lines 21-23 and Translated Zou Figure 1 110 and Translated Zou figure 4 S206).
With respect to claim 11, Zou teaches an electronic device, comprising a memory and a processor, wherein the memory is configured to store a computer program; and the processor is configured to, when invoking the computer program, cause the electronic device to implement an image editing method (“…the invention provides a mobile terminal, said mobile terminal comprising a memory and a processor, the memory storing an image processing computer program, the processor executing the computer program to realize the image processing method according to any method.” Page 3 lines 21-23), and the image editing method comprises: obtaining an original image (see Translated Zou figure 4 S201), wherein the original image comprises a first editing object (see Translated Zou figure 4 S202 and S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)) and a second editing object (see Translated Zou figure 4 S202 and S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)), and the first editing object and the second editing object are in different image regions of the original image (see Translated Zou figure 4 S202 and “S202, dividing the to-be-processed image to obtain a plurality of area images;” page 9 line 21); rendering the first editing object and the second editing object to different layers respectively (see Translated Zou figure 4 S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)), to generate a first original layer corresponding to the first editing object (see Translated Zou figure 4 S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers.
Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)) and a second original layer corresponding to the second editing object (see Translated Zou figure 4 S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)); rendering an editing result of the first editing object under a first editing operation based on the first original layer to generate a first editing layer corresponding to the first editing object in response to the first editing operation for the first editing object (Translated Zou figure 4 S204 and S205), and rendering an editing result of the second editing object under a second editing operation based on the second original layer to generate a second editing layer corresponding to the second editing object in response to the second editing operation for the second editing object (Translated Zou figure 4 S204 and S205); and generating, based on the first editing layer and the second editing layer, a target image as an editing result of the original image (Translated Zou figure 4 S206).
With respect to claim 12, Zou teaches a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a computing device, causes the computing device to implement the image editing method according to claim 1 (“…the invention provides a mobile terminal, said mobile terminal comprising a memory and a processor, the memory storing an image processing computer program, the processor executing the computer program to realize the image processing method according to any method.” Page 3 lines 21-23).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5-6 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Zou as applied to claims 1-4 above, and further in view of Liba (WO 2021010974 A1).
With respect to claim 5, Zou teaches the image editing method according to claim 4, and a plurality of editing objects (see Translated Zou figure 4 S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)), but does not teach smoothing an edge of an editing object based on a preset smoothing algorithm.
Liba teaches smoothing an edge of an editing object (“The computing device executes a machine-learned model that is trained to automatically segment an “original” image (e.g., a raw image, a low-resolution variant, or an enhanced version) into distinct regions. The model outputs a mask that defines the distinct regions and the computing device then refines the mask using edge-aware smoothing techniques…” paragraph 0002) based on a preset smoothing algorithm (“guided filter…” paragraph 0002).
Liba is analogous art in the same field of endeavor as the claimed invention. Liba is directed towards image segmentation and adjusting (“A computing device is described that automatically segments an image into different regions and automatically adjusts perceived exposure-levels, noise, white balance, or other characteristics associated with each of the different regions. The computing device executes a machine-learned model that is trained to automatically segment an “original” image (e.g., a raw image, a low-resolution variant, or an enhanced version) into distinct regions.” Paragraph 0002). A person of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to combine the teachings of Zou and Liba by utilizing Liba’s edge smoothing technique in combination with Zou’s image segmentation process, with the expectation that doing so would allow for the image outlines to be further refined (“The model outputs a mask that defines the distinct regions and the computing device then refines the mask using edge-aware smoothing techniques, such as a guided filter, to conform the edges of the mask to the edges of objects depicted in the image.” Paragraph 0002).
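A minimal sketch of the guided-filter refinement relied on from Liba follows: an edge-aware filter smooths a segmentation mask so its edges conform to the image’s own edges. It assumes the opencv-contrib build of OpenCV (cv2.ximgproc); the radius and eps values are illustrative assumptions, not parameters disclosed by Liba.

import cv2
import numpy as np

def smooth_mask_edges(image_bgr, mask_u8, radius=8, eps=1e-3):
    # Edge-aware smoothing of a binary mask (uint8, 0/255), guided by the
    # image itself so the mask edges follow the depicted object edges.
    soft = cv2.ximgproc.guidedFilter(image_bgr,
                                     mask_u8.astype(np.float32) / 255.0,
                                     radius, eps)
    return (soft * 255.0).clip(0, 255).astype(np.uint8)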
With respect to claim 6, Zou teaches the image editing method according to claim 2, and a plurality of layers (see Translated Zou figure 4 S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)), and further teaches overlaying the first editing layer and the second editing layer on the original image, respectively, to generate the target image (Translated Zou figure 4 S206).
Liba teaches setting a transparency of an edge region (“Part of refining the mask 210 can include adding matting to each of the multiple regions to add transparency at parts of the original image 208. For example, matting can be added with a particular transparency value to smoothly transition from adjusting one region (e.g., for sky adjustments) to another region (e.g., for non-sky adjustments) in regions where pixels could be considered part of the two regions (e.g., part of sky and part of non-sky). Such mixed pixels can occur, for example, along object edges, or near semi-transparent objects (e.g., like frizzy hair)” paragraph 0053), wherein the edge region is a region in an editing layer except for a region corresponding to an editing object (“Part of refining the mask 210 can include adding matting to each of the multiple regions to add transparency at parts of the original image 208. For example, matting can be added with a particular transparency value to smoothly transition from adjusting one region (e.g., for sky adjustments) to another region (e.g., for non-sky adjustments) in regions where pixels could be considered part of the two regions (e.g., part of sky and part of non-sky). Such mixed pixels can occur, for example, along object edges, or near semi-transparent objects (e.g., like frizzy hair)” paragraph 0053).
Liba is analogous art in the same field of endeavor as the claimed invention. Liba is directed towards image segmentation and adjusting (“A computing device is described that automatically segments an image into different regions and automatically adjusts perceived exposure-levels, noise, white balance, or other characteristics associated with each of the different regions. The computing device executes a machine-learned model that is trained to automatically segment an “original” image (e.g., a raw image, a low-resolution variant, or an enhanced version) into distinct regions.” Paragraph 0002). A person of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to combine the teachings of Zou and Liba by utilizing Liba’s edge smoothing technique in combination with Zou’s image segmentation process, with the expectation that doing so would allow for the image outlines to be further refined (“The model outputs a mask that defines the distinct regions and the computing device then refines the mask using edge-aware smoothing techniques, such as a guided filter, to conform the edges of the mask to the edges of objects depicted in the image.” Paragraph 0002).
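The edge-transparency (matting) idea cited from Liba can be sketched as follows: pixels in a band around the region outline receive intermediate alpha, so the edit applied to one region blends into the neighboring region. The feathering band width and the Gaussian blur used to produce the soft alpha are assumptions of this sketch, not details from Liba.

import cv2
import numpy as np

def feathered_overlay(base_bgr, layer_bgr, mask_u8, band=5):
    # Blur the binary region mask into a soft alpha so the edge region is
    # semi-transparent, then alpha-composite the edited layer over the base.
    k = 2 * band + 1  # odd Gaussian kernel size
    alpha = cv2.GaussianBlur(mask_u8.astype(np.float32) / 255.0, (k, k), 0)[..., None]
    out = alpha * layer_bgr.astype(np.float32) + (1.0 - alpha) * base_bgr.astype(np.float32)
    return out.astype(np.uint8)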
With respect to claim 9, Zou and Liba teach the image editing method according to claim 6. Zou additionally teaches a plurality of layers (see Translated Zou figure 4 S203 and “S203, creating a plurality of layers; adding respectively into one area of the image in each of the layers, so as to edit the different area images in different layers. Because the image to be processed is divided into a plurality of region image, a plurality of region image may be a plurality of content areas divided according to the content, also can be multi-area image according to user requirement setting division line for dividing, after the division, the each area image are placed on a corresponding layer…” pages 9 (bottom) – page 10 (top)).
Liba further teaches before the setting a transparency of an edge region to a preset transparency, further comprising: determining whether the layer comprises a shrank region (“The guided filter 202 generates the refined mask 212 which redefines the different regions that are specified by the mask 210, to have edges that match the edges of objects in the original image 208 and that further align the boundaries of the different regions that are specified by the mask 210 to conform to the color variations at the visible boundaries of the different regions in the original image 208. Part of refining the mask 210 can include adding matting to each of the multiple regions to add transparency at parts of the original image 208…Such mixed pixels can occur, for example, along object edges, or near semi-transparent objects (e.g., like frizzy hair).” Paragraph 0053), wherein the shrank region is a region that changes from the region corresponding to the editing object to the edge region after editing (“The guided filter 202 generates the refined mask 212 which redefines the different regions that are specified by the mask 210, to have edges that match the edges of objects in the original image 208 and that further align the boundaries of the different regions that are specified by the mask 210 to conform to the color variations at the visible boundaries of the different regions in the original image 208. Part of refining the mask 210 can include adding matting to each of the multiple regions to add transparency at parts of the original image 208…Such mixed pixels can occur, for example, along object edges, or near semi-transparent objects (e.g., like frizzy hair).” Paragraph 0053); and in response to the layer comprising the shrank region, performing edge transitional filling on the shrank region (“The guided filter 202 generates the refined mask 212 which redefines the different regions that are specified by the mask 210, to have edges that match the edges of objects in the original image 208 and that further align the boundaries of the different regions that are specified by the mask 210 to conform to the color variations at the visible boundaries of the different regions in the original image 208. Part of refining the mask 210 can include adding matting to each of the multiple regions to add transparency at parts of the original image 208…Such mixed pixels can occur, for example, along object edges, or near semi-transparent objects (e.g., like frizzy hair).” Paragraph 0053), and setting the shrank region to the region corresponding to the editing object (“The guided filter 202 generates the refined mask 212 which redefines the different regions that are specified by the mask 210, to have edges that match the edges of objects in the original image 208 and that further align the boundaries of the different regions that are specified by the mask 210 to conform to the color variations at the visible boundaries of the different regions in the original image 208. Part of refining the mask 210 can include adding matting to each of the multiple regions to add transparency at parts of the original image 208…Such mixed pixels can occur, for example, along object edges, or near semi-transparent objects (e.g., like frizzy hair).” Paragraph 0053).
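One possible, purely illustrative reading of the shrank-region handling mapped above (not code from Liba or from the application): pixels that belonged to the object’s region before editing but fall outside it afterwards are detected by mask subtraction and transitionally filled from their surroundings. The use of cv2.inpaint for the transitional fill is an assumption of this sketch.

import cv2

def fill_shrank_region(layer_bgr, mask_before, mask_after):
    # A "shrank region" here is the set of pixels inside the object's region
    # before editing (mask_before, uint8 0/255) but outside it afterwards
    # (mask_after). If present, fill it transitionally from its neighbors.
    shrank = cv2.subtract(mask_before, mask_after)
    if cv2.countNonZero(shrank) == 0:
        return layer_bgr  # no shrank region: nothing to fill
    return cv2.inpaint(layer_bgr, shrank, 3, cv2.INPAINT_TELEA)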
Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Zou and Liba as applied to claim 6 above, and further in view of Larking (EP 1746542 A1).
With respect to claim 7, Zou and Liba teach the image editing method according to claim 6, but do not teach the remaining limitations of claim 7. Larking teaches determining whether a region corresponding to the first editing object of the first editing layer overlaps a region corresponding to the second editing object of the second editing layer (“According to still another aspect of the present invention there is provided a method for composing a two-dimensional digital image, comprising the steps of: providing at least two two-dimensional image layers, each layer being associated with at least a subsection of said digital image, wherein each layer consists of several image elements; associating each image element of the layers with a depth value; and combining said layers for composition of said digital image, wherein in overlapping parts of said layers, the order of overlapping image elements is determined based on the depth values associated with said overlapping image elements.” Paragraph 0025); and in response to the region corresponding to the first editing object of the first editing layer not overlapping the region corresponding to the second editing object of the second editing layer, overlaying the first editing layer and the second editing layer on the original image in any order to generate the target image (“As will be understood, each of these layers can be shuffled and/or exchanged, resulting in a new image, wherein for example one or a plurality of the layers have been exchanged with new layers. From this illustrative example, it is apparent that the layers could be modified or exchanged separately, without affecting the other layers. For example, layer 143 could be exchanged, e.g. to an image of another car, but assuming that the new layer has the same depth value, this new layer would still be placed behind the person illustrated in layer 144 and in front of the tree illustrated in layer 142.” Paragraph 0037).
Larking is analogous art in the same field of endeavor as the claimed invention. Larking is directed towards image layer composition (“According to still another aspect of the present invention there is provided a method for composing a two-dimensional digital image, comprising the steps of: providing at least two two-dimensional image layers, each layer being associated with at least a subsection of said digital image, wherein each layer consists of several image elements; associating each image element of the layers with a depth value; and combining said layers for composition of said digital image, wherein in overlapping parts of said layers, the order of overlapping image elements is determined based on the depth values associated with said overlapping image elements.” Paragraph 0025). A person of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to combine the teachings of Zou, Liba, and Larking by utilizing Larking’s layer composition strategy with the expectation that doing so would lead to improvements in processing speed and reductions in cost (“Hence, this first aspect of the present invention makes it possible for a user to view and change between different components and accessories in a novel way, which, in comparison to heretofore known methods, requires less bandwidth and processing capacity, which is faster and where the download time can be reduced. In a similar manner, this aspect makes it possible to decrease the amount of data needed to be stored on a storage medium. Accordingly, these advantages will hence decrease costs involved for both the user and service provider providing the image data.” Paragraph 0014).
With respect to claim 8, Zou, Liba, and Larking teach the image editing method according to claim 7. Larking further teaches wherein in response to the region corresponding to the first editing object of the first editing layer overlapping the region corresponding to the second editing object of the second editing layer, obtaining a depth of the first editing object and a depth of the second editing object (“According to still another aspect of the present invention there is provided a method for composing a two-dimensional digital image, comprising the steps of: providing at least two two-dimensional image layers, each layer being associated with at least a subsection of said digital image, wherein each layer consists of several image elements; associating each image element of the layers with a depth value; and combining said layers for composition of said digital image, wherein in overlapping parts of said layers, the order of overlapping image elements is determined based on the depth values associated with said overlapping image elements.” Paragraph 0025); and overlaying the first editing layer and the second editing layer on the original image in a depth descending order to generate the target image (“According to still another aspect of the present invention there is provided a method for composing a two-dimensional digital image, comprising the steps of: providing at least two two-dimensional image layers, each layer being associated with at least a subsection of said digital image, wherein each layer consists of several image elements; associating each image element of the layers with a depth value; and combining said layers for composition of said digital image, wherein in overlapping parts of said layers, the order of overlapping image elements is determined based on the depth values associated with said overlapping image elements.” Paragraph 0025 and figure 2).
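The depth-ordered composition relied on from Larking can be sketched as follows: layers are overlaid in descending depth order so that nearer layers end up on top wherever regions overlap, while non-overlapping layers yield the same result in any order. The (depth, layer) pairing and the overlap test below are assumptions of the sketch, not code from Larking.

import numpy as np

def regions_overlap(mask_a, mask_b):
    # Claim 7's threshold question: do the two editing objects' regions
    # (boolean HxW masks) overlap at all?
    return bool(np.any(np.logical_and(mask_a, mask_b)))

def composite_by_depth(base, depth_layer_pairs):
    # depth_layer_pairs: (depth, RGBA layer) tuples; a larger depth means
    # farther from the viewer. Overlaying in descending depth order puts
    # nearer objects on top in any overlapping parts.
    target = base.copy()
    for _, layer in sorted(depth_layer_pairs, key=lambda p: p[0], reverse=True):
        opaque = layer[..., 3] > 0
        target[opaque] = layer[..., :3][opaque]
    return target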
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
CAMPANELLI (EP 0712096 A2): discloses an image editor that allows for the independent editing of selected objects.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REBECCA C WILLIAMS whose telephone number is (571)272-7074. The examiner can normally be reached M-F 7:30am - 4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew W Bee can be reached at (571)270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/REBECCA COLETTE WILLIAMS/Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677