DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 2014/0267867 A1), hereinafter referenced as Lee, in view of Dye et al. (US 2019/0342507 A1), hereinafter referenced as Dye.
Regarding claim 1, Lee discloses A method for interface display, comprising:
displaying a captured-image region (1310; figs. 14-21) and a material display region (1401-1406; figs. 14-21) on a first interface in response to a first operation (Pressing effect menu button 1302; [0063]), wherein an association relationship exists between the captured-image region and at least one of a size or a display position of the material display region (The position of the effect menu overlaps with the captured image region. This is an “association relationship” as broadly claimed.); and
displaying a first image (Image of car; fig. 14) in the captured-image region,
displaying, in the material display region, at least one material identifier (Plurality of filter previews; figs. 14-21) corresponding to at least one capturing material (Filter effect),
displaying a capturing control (1303; figs. 14-21) on the first interface, wherein an image content in the first image comprises a framing content of a capturing function corresponding to the first interface (The image is an image being captured by the image sensor which can be captured when camera button 1303 is pressed; [0063]); and
capturing based on a first capturing material corresponding to a first material identifier, in response to the capturing control being triggered, wherein the first material identifier is selected (Filter icons are selected, and when the camera button 1303 is pressed after selecting the filter icons, an image is captured with the selected filters; [0074]; figs. 14-21).
However, Lee fails to explicitly disclose a function region, displaying different capturing options in a single row, that switches to a material display region wherein the material identifiers are arranged in a single row. However, the examiner maintains that it was well known in the art to provide this, as taught by Dye.
In a similar field of endeavor, Dye discloses
wherein displaying the captured-image region and the material display region on the first interface in response to the first operation comprises:
switching a capturing interface (fig. 6C) to the first interface (fig. 6E) in response to an interface switching operation (pressing 622; [0228]-[0229]), and displaying the captured-image region (620-1; fig. 6E) and the material display region (624; fig. 6E) on the first interface (fig. 6E);
wherein the capturing interface comprises a function region (Region including SLO-MO, VIDEO, PHOTO, PORTRAIT, and SQUARE in fig. 6C) displaying different capturing options which are arranged in a single-row, and the function region is switched to the material display region in response to the first operation, wherein the at least one material identifier is arranged in a single-row in the material display region (fig. 6E; [0228]-[0229]).
Lee teaches displaying a single capture option in a single row in a capturing interface and then when a user presses the single capture option, displaying a plurality of filter effects in its place in a single row. Dye teaches displaying a plurality of capture options in a single row in a capturing interface and then when a user presses an input, displaying a plurality of effect options in its place in a single row. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the single capture option of Lee with a plurality of capture options as in Dye to achieve the predictable result of allowing a user to generate different types of content using the filters.
Regarding claim 2, Lee and Dye, the combination, discloses everything claimed as applied above (see claim 1), in addition, Lee discloses, wherein the capturing control (1303) is displayed in at least one of the following regions: the captured-image region (figs. 14-21) or the material display region.
Regarding claim 19, Lee discloses An electronic device (fig. 1), comprising:
at least one processor (112; fig. 1); and
a storage apparatus (111; fig. 1), configured to store at least one program,
wherein the at least one program, when executed by the at least one processor, causes the at least one processor ([0023]) to:
display a captured-image region (1310; figs. 14-21) and a material display region (1401-1406; figs. 14-21) on a first interface in response to a first operation (Pressing effect menu button 1302; [0063]), wherein an association relationship exists between the captured-image region and at least one of a size or a display position of the material display region (The position of the effect menu overlaps with the captured image region. This is an “association relationship” as broadly claimed.); and
display a first image (Image of car; fig. 14) in the captured-image region, and displaying, in the material display region, at least one material identifier (Plurality of filter previews; figs. 14-21) corresponding to at least one capturing material (Filter effect), and displaying a capturing control (1303; figs. 14-21) on the first interface, wherein an image content in the first image comprises a framing content of a capturing function corresponding to the first interface (The image is an image being captured by the image sensor which can be captured when camera button 1303 is pressed; [0063]); and
capture based on a first capturing material corresponding to a first material identifier, in response to the capturing control being triggered, wherein the first material identifier is selected (Filter icons are selected, and when the camera button 1303 is pressed after selecting the filter icons, an image is captured with the selected filters; [0074]; figs. 14-21).
However, Lee fails to explicitly disclose a function region, displaying different capturing options in a single row, that switches to a material display region wherein the material identifiers are arranged in a single row. However, the examiner maintains that it was well known in the art to provide this, as taught by Dye.
In a similar field of endeavor, Dye discloses
wherein the processor causing the device to display the captured-image region and the material display region on the first interface in response to the first operation further causes the device to:
switch a capturing interface (fig. 6C) to the first interface (fig. 6E) in response to an interface switching operation (pressing 622; [0228]-[0229]), and displaying the captured-image region (620-1; fig. 6E) and the material display region (624; fig. 6E) on the first interface (fig. 6E);
wherein the capturing interface comprises a function region (Region including SLO-MO, VIDEO, PHOTO, PORTRAIT, and SQUARE in fig. 6C) displaying different capturing options which are arranged in a single-row, and the function region is switched to the material display region in response to the first operation, wherein the at least one material identifier is arranged in a single-row in the material display region (fig. 6E; [0228]-[0229]).
Lee teaches displaying a single capture option in a single row in a capturing interface and then when a user presses the single capture option, displaying a plurality of filter effects in its place in a single row. Dye teaches displaying a plurality of capture options in a single row in a capturing interface and then when a user presses an input, displaying a plurality of effect options in its place in a single row. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the single capture option of Lee with a plurality of capture options as in Dye to achieve the predictable result of allowing a user to generate different types of content using the filters.
Regarding claim 20, it recites similar limitations to claim 19 and is therefore rejected for the same reasons as stated above (see claim 19).
Claims 4-8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Dye, further in view of Kunishige et al. (US 2012/0307103 A1), hereinafter referenced as Kunishige.
Regarding claim 4, Lee and Dye, the combination, discloses everything claimed as applied above (see claim 1), Lee discloses wherein a second image (Image of car shown in fig. 13) corresponding to the capturing function is displayed on the capturing interface (fig. 13); a size of the first image is [the same as] a size of the second image; and a size ratio of the first image is consistent with a size ratio of the second image (figs. 13-21).
However, the combination fails to explicitly disclose that the size of the first image is less than a size of the second image. However, the examiner maintains that it was well known in the art to provide this, as taught by Kunishige.
In a similar field of endeavor, Kunishige discloses a size of the first image is less than a size of the second image ([0180]; The size of the live view image is reduced when the effect menu is displayed; fig. 14); and a size ratio of the first image is consistent with a size ratio of the second image (The live view image is the same as the full screen live view image (W100; fig. 8) but reduced in size; [0180]).
The combination teaches displaying a first live view image where no effect menu is displayed and a second live view image where an effect menu is displayed, wherein the effect menu is superimposed on the live view image. Kunishige teaches reducing the size of the live view image when an effect menu is displayed instead of superimposing the effect menu. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the superimposition method of the combination with the reduction in size of the live view image in Kunishige to achieve the predictable result of allowing the user to see the entire image while the effect menu is displayed, as shown in Kunishige (fig. 14).
Regarding claim 5, Lee, Dye, and Kunishige, the combination, discloses everything claimed as applied above (see claim 4), in addition, Lee discloses, wherein the captured-image region is comprised in a target region (Region not including the effect menu; figs. 14-21); the target region further comprises a control display region (Region including capture button 1303; figs. 14-21).
In addition, Kunishige discloses an association relationship exists between a size of the target region and a size of the material display region (As shown in fig. 14, the size of the live view image (target region) and the effect menu are made to both fit in the screen 21. This is an “association relationship”.).
The combination teaches displaying a first live view image where no effect menu is displayed and a second live view image where an effect menu is displayed, wherein the effect menu is superimposed on the live view image. Kunishige teaches reducing the size of the live view image when an effect menu is displayed instead of superimposing the effect menu. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the superimposition method of the combination with the reduction in size of the live view image in Kunishige to achieve the predictable result of allowing the user to see the entire image while the effect menu is displayed, as shown in Kunishige (fig. 14).
Regarding claim 6, Lee, Dye, and Kunishige, the combination, discloses everything claimed as applied above (see claim 5), in addition, Lee discloses, wherein a target control (1303; figs. 14-21) is displayed in the control display region; and the target control comprises at least one of a capturing setting control, a material associated control, or the capturing control (1303).
Regarding claim 7, Lee, Dye, and Kunishige, the combination, discloses everything claimed as applied above (see claim 6), in addition, Lee discloses wherein the material associated control (capture button 1303; figs. 14-21) comprises at least one of the following: a material attribute related control or a material-based interaction control (The capture button is used to capture an image having the selected effect, which reads on a “material-based interaction” as broadly claimed.).
Regarding claim 8, Lee, Dye, and Kunishige, the combination, discloses everything claimed as applied above (see claim 6), in addition, Lee discloses, wherein a currently displayed target control (1303) in the control display region is associated with the first capturing material (The capturing button 1303 triggers capture of an image with the selected material and therefore is “associated with” the target capturing material as broadly claimed.).
Regarding claim 17, Lee, Dye, and Kunishige, the combination, discloses everything claimed as applied above (see claim 1), in addition, Lee discloses, wherein in response to the first material identifier in the selected state existing in the at least one material identifier, the first image comprises an image obtained after the first capturing material is applied to a captured content (Figs. 15-16; [0067]-[0069]; When a filter is selected, the image is shown with the selected filter on the display.).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Dye, further in view of Kunishige, and further in view of Cragg et al. (US 2020/0344411 A1), hereinafter referenced as Cragg.
Regarding claim 9, Lee, Dye, and Kunishige, the combination, discloses everything claimed as applied above (see claim 8), in addition, Lee discloses, wherein at least one of a type, a number, or a style of the currently displayed target control (1303; figs. 14-21) in the control display region is associated with the first capturing material (The type of control (capturing button) is “associated with” the target capturing material because the capture button is used to capture an image having the selected effect.).
In addition, Cragg discloses wherein at least one of a type, a number, or a style of the currently displayed target control in the control display region is associated with the first capturing material (The type of slider changes according to the selected filter. Specifically, the slider 904 adjusts a parameter for the selected filter; [0070]; fig. 9).
The combination teaches providing a filter menu on a preview screen for an imaging device. Cragg teaches providing a filter menu on a preview screen for an imaging device wherein when a filter is selected an additional slider is provided to adjust a parameter for the selected filter. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to improve the combination by applying the technique of providing a slider based on the selected filter to achieve the predictable result of allowing a user to more finely tune the filter effect on the image to be captured.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Dye, further in view of Kunishige, and further in view of Applicant Admitted Prior Art (AAPA).
Regarding claim 18, Lee and Dye, the combination, discloses everything claimed as applied above (see claim 1), in addition, Lee discloses, further comprising:
in response to a triggering operation for the capturing control, performing capturing based on the first capturing material corresponding to the first material identifier (Filter icons are selected, and when the camera button 1303 is pressed after selecting the filter icons, an image is captured with the selected filters; [0074]; figs. 14-21).
However, Lee fails to explicitly disclose that the association relationship comprises the captured-image region and the material display region not having an intersection. However, the examiner maintains that it was well known in the art to provide this, as taught by Kunishige.
In a similar field of endeavor, Kunishige discloses wherein the association relationship exists between the captured-image region and at least one of the size or display position of the material display region comprises: the captured-image region (W100) or the material display region (W101-W104) do not have an intersection (fig. 14).
The combination teaches displaying a first live view image where no filter menu is displayed and a second live view image where an effect menu is displayed, wherein the effect menu is superimposed on the live view image. Kunishige teaches reducing the size of the live view image when an effect menu is displayed instead of superimposing the effect menu. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the superimposition method of the combination with the reduction in size of the live view image in Kunishige to achieve the predictable result of allowing the user to see the entire image while the effect menu is displayed, as shown in Kunishige (fig. 14).
However, Lee, Dye, and Kunishige, the combination, fails to explicitly disclose determining work to be posted according to a capturing result. However, the examiner maintains that it was well known in the art before the effective filing date of the claimed invention (AIA) to provide this, as taught by AAPA.
The combination teaches capturing an image with a filter effect. AAPA teaches that an interface for allowing a user to select desired images for posting to another source, such as social media, is well known. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to improve the combination by applying the technique of providing an interface for a user to select captured images that the user desires to post to another source to achieve the predictable result of sharing the images with other people.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL M BERARDESCA whose telephone number is (571)270-3579. The examiner can normally be reached Mon-Thurs 10-8, Fri 10-2.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sinh Tran can be reached at (571)272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
PAUL M. BERARDESCA
Examiner
Art Unit 2637
/PAUL M BERARDESCA/Primary Examiner, Art Unit 2637 2/21/2026