DETAILED ACTION
Notice of Pre-AIA or AIA Status.
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. Claims 1-14, filed on 03/26/2024, are pending and being examined. Claims 1, 13, and 14 are in independent form.
Priority
3. Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Claim Rejections - 35 USC § 102
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention; or
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
6. Claims 1-3, 5-6, 9-10, and 12-14 are rejected under 35 U.S.C. 102(a)(1)/102(a)(2) as being anticipated by Murata et al. (US 2021/0058552, hereinafter “Murata”).
Regarding claim 1, Murata discloses an electronic apparatus comprising: at least one processor causing the electronic apparatus (the digital camera shown in figs. 1-2) to act as:
an operation unit configured to output a first operation instruction instructing preparation for shooting, and a second operation instruction instructing recording of a captured image (wherein the shutter switch 102 of the digital camera has two stages: the first shutter switch 102a, which is turned on by an image capturing preparation instruction and generates the signal SW1 (i.e., “the first operation instruction”), and the second shutter switch 102b, which is turned on by an image capturing and recording instruction and generates the signal SW2 (i.e., “the second operation instruction”); see S503-S508 of fig. 2, para. 45 (or para. 96), and para. 46 (or para. 97));
a storage unit configured to temporarily retain image data output from an imaging unit while the first operation instruction is being output (see para.73: “The compressed raw image data may be temporarily stored and buffered in the memory 134”);
an identification unit configured to identify a subject (wherein the parameters which determine and control the image capturing and recording process (i.e., the SW2 signal is input) include “a face recognition result” based on an image captured from a target/person; see para.51; also see “recognizing object information such as a face or a person in image information” in para.70);
a control unit configured to, while the first operation instruction is being output, execute control to capture the image data by the imaging unit and retain the image data in the storage unit (see para. 46: “Based on the second shutter switch signal SW2, the system control unit 132 starts a series of operations of an image capturing process from the reading of a signal from the sensor unit 106 to the writing of a captured image as an image file to the recording medium 200”) based on identification information of the identification unit (wherein the parameters which determine and control the image capturing and recording process (i.e., the SW2 signal is input) include “a face recognition result” based on an image captured from a target/person; see para. 51; also see “recognizing object information such as a face or a person in image information” in para. 70); and
a recording unit configured to, when the second operation instruction is output, record, onto a recording medium, image data of an image captured by the imaging unit in response to the second operation instruction, and image data retained in the storage unit at a timing closest to when the second operation instruction is output (see para.46: “Based on the second shutter switch signal SW2, the system control unit 132 starts a series of operations of an image capturing process from the reading of a signal from the sensor unit 106 to the writing of a captured image as an image file to the recording medium 200”).
Regarding claim 2, Murata discloses the electronic apparatus according to Claim 1, wherein the identification unit is configured to identify the subject based on an output from the imaging unit (see para.51: wherein the parameters which determine and control the image capturing and recording process (i.e., the SW2 signal is input) include “a face recognition result” based on an image captured from a target/person).
Regarding claim 3, Murata discloses the electronic apparatus according to Claim 2, wherein a cycle of subject identification by the identification unit is set in accordance with a driving mode of the imaging unit (ibid.).
Regarding claim 5, Murata discloses the electronic apparatus according to Claim 1, wherein, when the subject is identified as a moving subject by the identification unit, the control unit is configured to execute control to capture the image data by the imaging unit and retain the image data in the storage unit while the first operation instruction is being output (e.g., see para.70: “The detection processing unit 310 has the function of detecting and recognizing object information such as a face or a person in image information. For example, the detection processing unit 310 detects a face in a screen represented by the image information, and if there is a face in the screen, stores information indicating the position of the face in the memory 134. The system control unit 132 authenticates a particular person based on feature information regarding the face stored in the memory 134. Display information indicating the calculated evaluation values and the detection and recognition results may be output to the display processing unit 309 and displayed with the live view image”).
Regarding claim 6, Murata discloses the electronic apparatus according to Claim 1, wherein the control unit is configured to set a number of shots based on identification information of the identification unit (ibid.).
Regarding claim 9, Murata discloses the electronic apparatus according to Claim 1, wherein the identification unit is configured to identify the subject based on an output from a sensor different from the imaging unit (wherein the detection/recognition unit 310 performs the face recognition based on the output of the sensor correction process unit 302 rather than the sensor unit 106; see fig.3).
Regarding claim 10, Murata discloses the electronic apparatus according to claim 9, wherein the sensor is an event sensor configured to output only information on a pixel that has had a luminance change (see the sensor unit 106 of figs.2/3 and the corresponding paragraphs).
Regarding claim 12, Murata discloses the electronic apparatus according to Claim 1, further comprising: a communication unit configured to communicate with an external device, wherein, among a plurality of the electronic apparatuses, when an electronic apparatus acting as a master enables a pre-capture function, the electronic apparatus acting as a master issues, through the communication unit, a command to enable the pre-capture function to the remaining electronic apparatuses acting as slaves (see fig.2 and para. 33: “If the digital camera 100 is connected to an external display device (external apparatus) via a communication unit 109, and the output function to the external apparatus is enabled, the live view function may be executed using the external apparatus (live view output process). In this case, a main engine 140 acquires image data from the front engine 130, generates display image data, and controls the display unit 101 and the EVF 108.”).
Regarding claims 13 and 14, each is an inherent variation of claim 1 and is therefore interpreted and rejected for the reasons set forth above in the rejection of claim 1.
Claim Rejections - 35 USC § 103
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
7-1. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Murata in view of Bigioi et al. (US 7916971, hereinafter “Bigioi”).
Regarding claim 4, Murata does not disclose wherein identification of the subject includes detection of a smile of the subject. However, this feature would have been obvious. In the same field of endeavor, Bigioi teaches an image processing apparatus which can postpone acquisition of the image until every person in the image is smiling. See col. 2, lines 55-63: “The analyzing of facial regions may include applying an Active Appearance Model (AAM) to each facial region, and analyzing AAM parameters for each facial region to provide an indication of facial expression, and/or analyzing each facial region for contrast, sharpness, texture, luminance levels or skin color or combinations thereof, and/or analyzing each facial region to determine if an eye of the facial region is closed, if a mouth of the facial region is open and/or if a mouth of the facial region is smiling.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Bigioi into the teachings of Murata. Suggestion or motivation for doing so would have been to acquire a desired smiling-face image. Therefore, the claim is unpatentable over Murata in view of Bigioi.
7-2. Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Murata in view of Takeuchi et al. (US 2010/0149361, hereinafter “Takeuchi”).
Regarding claims 7 and 8, Murata does not disclose wherein identification of the subject includes a vehicle (claim 7) or an animal (claim 8). However, in the same field of endeavor, Takeuchi teaches that the subject/object captured in the live view images may include a moving subject, such as the vehicle shown in fig. 6. See fig. 6 and para. 161. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Takeuchi into the teachings of Murata and utilize the camera of Murata to take pictures of moving targets such as a vehicle (claim 7) or an animal (claim 8). Suggestion or motivation for doing so would have been to “provide an image evaluation apparatus and camera which are capable of evaluating an image which is comprehensively good”. See abstract. Therefore, the claims are unpatentable over Murata in view of Takeuchi.
7-3. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Murata in view of Siminoff (US 10939120, hereinafter “Siminoff”).
Regarding claim 11, Murata does not disclose wherein the sensor is a thermosensor for detecting a temperature of the subject. However, this is a well-known and widely used technique in the field of digital cameras. As evidence, in the same field of endeavor, Siminoff teaches a method for face recognition “by using thermal cameras, which may only detect the shape of the head and ignore the subject accessories such as glasses, hats, or make up”. See col. 18, lines 6-9. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Siminoff into the teachings of Murata. Suggestion or motivation for doing so would have been to “only detect the shape of the head and ignore the subject accessories such as glasses, hats, or make up”. See col. 18, lines 6-9. Therefore, the claim is unpatentable over Murata in view of Siminoff.
Conclusion
8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUIPING LI whose telephone number is (571) 270-3376. The examiner can normally be reached 8:30 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HENOK SHIFERAW can be reached on (571)272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov; https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center, and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RUIPING LI/Primary Examiner, Ph.D., Art Unit 2666