DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 9, 11-15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Asakura (JP2012089973A, cited in IDS) in view of Kumakura (JP2009086601A, cited in IDS).
As to claim 1, Asakura discloses an imaging apparatus (Fig.1: camera 1) comprising:
an image sensor (Fig.1: imaging device 21) configured to convert an optical image of a subject input via a focus lens (Fig.1: interchangeable lens unit 50) into an imaging signal ([0036]: “The imaging processing unit 22 is connected to the imaging device 21, and the imaging device 21 converts the subject image formed by the interchangeable lens unit 50 into an image signal”);
an image signal processor (Fig.1: imaging processing unit 22) configured to generate a video signal from the imaging signal ([0036]: “The image sensor processing unit 22 reads an image signal from the image sensor 21, converts it into digital image data”);
a first display (Figs. 1 and 2: rear display unit 29) configured to display the video signal output from the image signal processor (Fig.14(a); [0039]: “The display processing unit 28 causes the rear display unit 29 or the EVF 30 to display a live view display during shooting, a REC view display during release, and a playback image during playback.”);
a second display (Figs.1 and 2: EVF 30) configured to display the video signal output from the image signal processor (Fig. 14(b); [0039]: “The display processing unit 28 causes the rear display unit 29 or the EVF 30 to display a live view display during shooting, a REC view display during release, and a playback image during playback.”), wherein a display size of the second display is smaller than a display size of the first display (See Fig.3);
a first touch panel (Figs. 1 and 2: touch panel unit 14) configured to detect a touch operation relating to a photographing function of the imaging apparatus ([0044]: “In this case, when the user wants to adjust the focus or exposure control on the subject 81b, the subject 81b is touched as shown in FIG. When the touch panel unit 14 outputs a touch signal to the subject 81b, the screen information control unit 11c displays the cursor 84 on the rear display unit 29 so as to be superimposed on the subject 81b”);
a processor (Fig.1: control unit 11, imaging processing unit 22 and display processing unit 28) configured to:
control the first display to display the video signal and a video object associated with the photographing function (Figs.3A-3C; [0044]: image data is displayed on the rear display unit 29, and the cursor 84 is superimposed on the subject 81b on the rear display unit 29);
control the second display to display the video signal (Fig.5; [0045]: “live view image or the like is displayed on the EVFLCD 32”) and the video object associated with the photographing function (Fig.5C; [0047]: “When the touch panel unit 14 outputs a detection signal, the screen information control unit 11c displays a cursor 84 on the EVFLCD 32 so as to be superimposed on the subject 81b”),
wherein the processor is further configured to:
display the video object on the first display when the touch operation is detected by the first touch panel ([0044]: “When the touch panel unit 14 outputs a touch signal to the subject 81b, the screen information control unit 11c displays the cursor 84 on the rear display unit 29 so as to be superimposed on the subject 81b”);
display the video object on the second display ([0047]: “the screen information control unit 11c displays a cursor 84 on the EVFLCD 32 so as to be superimposed on the subject 81b”);
cause the image signal processor to generate an encoded image data ([0038]: “The compression/decompression unit 25 is a circuit for compressing still image or continuous image data temporarily stored in the SDRAM 23 by a compression method such as JPEG or TIFF, and decompressing the data for display”); and
store the encoded image data into a storage ([0041]: “The recording/playback unit 26 stores the image data compressed by the compression/decompression unit 25 in the image storage unit 27”).
Asakura fails to disclose a second touch panel configured to detect the touch operation relating to the photographing function of the imaging apparatus; and
the processor is configured to display the video object on the second display when the touch operation is detected by the second touch panel.
However, Kumakura teaches a second touch panel (Fig.5: touch panel 11) configured to detect the touch operation relating to the photographing function of the imaging apparatus; and the touch operation is detected by the second touch panel (Fig.9; [0130]: “If it is determined in S513 that the photographer has their eyes close to the camera, it is determined that the photographer is attempting to take a photograph, and any subsequent operations performed on the touch panel are determined to be operations for setting photographing conditions”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Asakura with the teaching of Kumakura to have a second touch panel configured to detect the touch operation relating to the photographing function of the imaging apparatus; and the processor is configured to display the video object on the second display when the touch operation is detected by the second touch panel, so as to provide an additional touch panel for touch operation when the user is looking into the eyepiece while having limited access to the rear display unit, thereby making the camera easier to operate.
As to claim 2, Asakura in view of Kumakura discloses the imaging apparatus according to claim 1, further comprising a proximity sensor (Asakura: Fig.4: detection unit 15),
wherein the proximity sensor comprises an infrared light emitter (Asakura: [0034]: a light projecting unit, such as an infrared ray) and an infrared light receiver (Asakura: [0034]: a light receiving unit), and
wherein a status of the imaging apparatus is determined by a signal from the proximity sensor (Asakura: [0028]: “The display switching control unit 11a receives the detection result from the detection unit 15, and performs switching control for displaying on one of the two display units based on the detection result”).
As to claim 3, Asakura in view of Kumakura discloses the imaging apparatus according to claim 1, wherein the video object is a focus frame indicating a focus area of the video signal and is superimposed on the video signal by the image signal processor (Asakura: Figs.3 and 5; [0044] and [0047]: cursor 84 is superimposed on the subject on the rear display unit 29 or EVF 30), and
wherein the photographing function is a function of autofocusing performed by the image signal processor (Asakura: [0043] and [0046]: AF (Auto Focus)).
As to claim 4, Asakura in view of Kumakura discloses the imaging apparatus according to claim 1, wherein the image signal processor is further configured to perform white balance correction to the video signal (Asakura: [0037]: the image processing unit 24 performs various image processing including white balance).
As to claim 5, Asakura in view of Kumakura discloses the imaging apparatus according to claim 1, wherein the first display or the second display is either a liquid crystal display or an organic EL display (Asakura: [0040]: EVF 30 includes an electronic viewfinder liquid crystal (EVF LCD) 32).
As to claim 9, Asakura in view of Kumakura discloses the imaging apparatus according to claim 1, wherein the processor is further configured to:
enable the first touch panel and disable the second touch panel, when the video signal is displayed on the first display (Asakura: [0044]: when the image is displayed on the rear display unit 29, the touch panel unit 14 is used to detect and output a touch signal); and
disable the first touch panel and enable the second touch panel, when the video signal is displayed on the second display (Asakura: [0047] and [0064]: when the image is displayed on the EVF 30, the touch pad operation is enabled only in the effective range 14a; see Figs.6B-13B. In the combination of Asakura and Kumakura, the camera of Asakura is modified to have an additional touch panel/a second touch panel as taught by Kumakura; therefore, the combination of Asakura and Kumakura teaches the claimed limitations).
Method claims 11-15 and 19 recite substantially similar subject matter as disclosed in claims 1-5 and 9, respectively; therefore, they are rejected for the same reasons.
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Asakura (JP2012089973A, cited in IDS) in view of Kumakura (JP2009086601A, cited in IDS) as applied to claim 1 above, and further in view of Moyal et al. (US 2020/0012365 A1).
As to claim 6, Asakura in view of Kumakura discloses the imaging apparatus according to claim 1, but fails to disclose wherein the first touch panel or the second touch panel comprises a plurality of capacitive sensors which are arranged two-dimensionally.
However, Moyal et al. teaches that the first touch panel or the second touch panel comprises a plurality of capacitive sensors which are arranged two-dimensionally (Fig.2: capacitive sensor array).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Asakura and Kumakura with the teaching of Moyal et al. such that the first touch panel or the second touch panel comprises a plurality of capacitive sensors which are arranged two-dimensionally, so as to provide a touch panel/display with multi-touch support, high sensitivity and better visual quality, thereby providing a fast and smooth user experience.
Method claim 16 recites substantially similar subject matter as disclosed in claim 6; therefore, it is rejected for the same reasons.
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Asakura (JP2012089973A, cited in IDS) in view of Kumakura (JP2009086601A, cited in IDS) as applied to claim 1 above, and further in view of Leleannec et al. (US 2017/0374390 A1).
As to claim 7, Asakura in view of Kumakura discloses the imaging apparatus according to claim 1, wherein the encoded image data is in JPEG format when the encoded image data is a still image (Asakura: [0038]: the compression/decompression unit 25 compresses a still image by a compression method such as JPEG).
The above combination of Asakura and Kumakura fails to disclose that the encoded image data is in H.264 or H.265 format when the encoded image data is a moving image.
However, Leleannec et al. teaches that the encoded image data is in H.264 or H.265 format when the encoded image data is a moving image ([0053]).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Asakura and Kumakura with the teaching of Leleannec et al. such that the encoded image data is in H.264 or H.265 format when the encoded image data is a moving image, so as to provide widespread compatibility.
Method claim 17 recites substantially similar subject matter as disclosed in claim 7; therefore, it is rejected for the same reasons.
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Asakura (JP2012089973A, cited in IDS) in view of Kumakura (JP2009086601A, cited in IDS) as applied to claim 1 above, and further in view of Jeon et al. (US 2012/0105579 A1, cited in IDS).
As to claim 8, Asakura in view of Kumakura discloses the imaging apparatus according to claim 1, wherein the video image is decoded from the encoded image data by a decoder (Asakura: [0038]: the compression/decompression unit 25 decompresses the data for display).
The above combination fails to disclose the processor is further configured to:
display, on the first display, a thumbnail image corresponding to the encoded image data; and
upon detection of a touch operation to the thumbnail image, display a video image on the first display.
However, Jeon et al. teaches displaying, on the first display, a thumbnail image corresponding to the encoded image data (Figs.15; [0258]: images are displayed as thumbnails on the screen); and upon detection of a touch operation to the thumbnail image, display a video image on the first display (Figs.15B and 15C; [0260]: the image selected in Fig.15B is displayed on the screen as shown in Fig.15C).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Asakura and Kumakura with the teaching of Jeon et al. to display, on the first display, a thumbnail image corresponding to the encoded image data; and upon detection of a touch operation to the thumbnail image, display a video image on the first display, so as to provide better user experience.
Method claim 18 recites substantially similar subject matter as disclosed in claim 8; therefore, it is rejected for the same reasons.
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Asakura (JP2012089973A, cited in IDS) in view of Kumakura (JP2009086601A, cited in IDS) as applied to claim 1 above, and further in view of Teraoka et al. (US 2017/0251187 A1).
As to claim 10, Asakura in view of Kumakura discloses the imaging apparatus according to claim 1, but fails to disclose wherein the processor is further configured to turn off the first display or a backlight of the first display, when a touch operation is not detected by the first touch panel for a predetermined period.
However, Teraoka et al. teaches turning off the first display or a backlight of the first display, when a touch operation is not detected by the first touch panel for a predetermined period ([0092]: “when not receiving any touch input (such as detecting an object) over a preset period (e.g., about 10 minutes), transits to the power-saving mode (sleep mode), thereby turning off the display light source”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Asakura and Kumakura with the teaching of Teraoka et al. to turn off the first display or a backlight of the first display, when a touch operation is not detected by the first touch panel for a predetermined period, so as to reduce power consumption.
Method claim 20 recites substantially similar subject matter as disclosed in claim 10; therefore, it is rejected for the same reasons.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHENZHEN WU whose telephone number is (571)272-2519. The examiner can normally be reached 8:30 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SINH TRAN can be reached at (571)272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZHENZHEN WU/Examiner, Art Unit 2637
/SINH TRAN/Supervisory Patent Examiner, Art Unit 2637