DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims [1+2+4, 3, 5-7, 8+9+11, 10 and 12-15] are rejected on the ground of nonstatutory double patenting as being unpatentable over claims [1, 2, 5-9 and 12-15] of U.S. Patent No. [11,558,545]. Although the claims at issue are not identical, they are not patentably distinct from each other because claims [1+2+4, 3, 5-7, 8+9+11, 10 and 12-15] of the current application are obvious variants of, and encompassed by, claims [1, 2, 5-9 and 12-15] of U.S. Patent No. [11,558,545].
Claims [1+2, 4+5, 8+9 and 15] are also rejected on the ground of nonstatutory double patenting as being unpatentable over claims [1, 3-4, 6 and 11] of U.S. Patent No. [12,142,021]. Although the claims at issue are not identical, they are not patentably distinct from each other because claims [1+2, 4+5, 8+9 and 15] of the current application are obvious variants of, and encompassed by, claims [1, 3-4, 6 and 11] of U.S. Patent No. [12,142,021].
Examiner Note I: In the rejections above, the notation Claim [a+b] signifies the combined scope of the listed claims.
Below are the tables showing the conflicting claims.
U.S. Application No. 18/912,706
U.S. Patent No. 11,558,545
1. An electronic apparatus comprising: a camera; a processor configured to control the camera; and a memory stores at least one instruction, wherein the processor is configured, by executing the at least one instruction, to: obtain a plurality of image frames, identify a best image frame from among the plurality of image frames by applying the plurality of image frames to a network model, and provide the identified best image frame, wherein the network model recognizes a plurality of features from the plurality of image frames, and identifies the best image frame based on a first feature recognized in a first image frame among the plurality of image frames, and wherein the first feature comprises a facial recognition feature and a body feature of a person recognized in the first image. 2. The electronic apparatus as claimed in claim 1, wherein the plurality of image frames include image frames obtained by the camera during a predetermined first time period before a user selection of a user interface button is received and image frames obtained by the camera during a predetermined second time period after the user selection is received. 4. The electronic apparatus as claimed in claim 3, wherein the second feature comprises a facial recognition feature and a body feature of a person recognized in the second image frame.
1. An electronic apparatus comprising: a camera; a processor configured to control the camera; and a memory configured to be electrically connected to the processor and to store at least one network model for recognizing a plurality of predetermined feature information in an input image frame, wherein the memory stores at least one instruction, wherein the processor is configured, by executing the at least one instruction, to: identify a best image frame from among a plurality of image frames, based on a first feature information recognized by the at least one network model in the best image frame, and provide the identified best image frame, and wherein the plurality of predetermined feature information comprises a facial recognition feature and a body feature of a person, and wherein the plurality of image frames include image frames obtained by the camera during a predetermined first time period before a user selection of a user interface button to capture an image is received and image frames obtained by the camera during a predetermined second time period after the user selection is received.
3. The electronic apparatus as claimed in claim 1, wherein the processor is further configured to execute the at least one instruction to: identify another best image frame from among the plurality of image frames, wherein the network model identifies the other best image frame based on a second feature recognized in a second image frame among the plurality of image frames, and provide the identified best image frame and the identified other best image frame on a display of the electronic apparatus.
2. The electronic apparatus as claimed in claim 1, wherein the processor is further configured to execute the at least one instruction to: identify another best image frame from among the plurality of image frames, based on a second feature information recognized by the at least one network model in the other best image frame, and provide the identified best image frame and the identified other best image frame on a display of the electronic apparatus.
5. The electronic apparatus as claimed in claim 1, wherein the body feature is a feature indicative of a jumping movement of a body.
5. The electronic apparatus as claimed in claim 1, wherein the body feature is a feature indicative of a jumping movement of a body.
6. The electronic apparatus as claimed in claim 2, wherein the processor is further configured to execute the at least one instruction to: control the camera to stop capturing in response to the predetermined second time period elapsing.
6. The electronic apparatus as claimed in claim 1, wherein the processor is further configured to execute the at least one instruction to: control the camera to stop capturing in response to the predetermined second time period elapsing.
7. The electronic apparatus as claimed in claim 2, wherein the processor is further configured to execute the at least one instruction to: control the camera to capture the plurality of image frames without notifying a user that recording of plural frames has started.
7. The electronic apparatus as claimed in claim 1, wherein the processor is further configured to execute the at least one instruction to control the camera to capture the plurality of image frames without notifying a user that recording of plural frames has started.
8. A controlling method of an electronic apparatus comprising a camera, the controlling method comprising: capturing, via the camera, a plurality of image frames; identifying a best image frame from among the plurality of image frames by applying the plurality of image frames to a network model; and providing the identified best image frame; and wherein the network model recognizes a plurality of features from the plurality of image frames, and identifies a first image frame as the best image frame based on a first feature recognized in the first image frame among the plurality of image frames, and wherein the first feature comprises a facial recognition feature and a body feature of a person recognized in the first image.
9. The controlling method as claimed in claim 8, wherein the plurality of image frames include image frames obtained by the camera during a predetermined first time period before a user selection of a user interface button is received and image frames obtained by the camera during a predetermined second time period after the user selection is received. 11. The controlling method as claimed in claim 10, wherein the second feature comprises a facial recognition feature and a body feature of a person recognized in the second image frame.
8. A controlling method of an electronic apparatus comprising a camera, the controlling method comprising: capturing, via the camera, a plurality of image frames; identifying a best image frame from among the plurality of image frames, based on a first feature information recognized in the best image frame by at least one network model for recognizing a plurality of predetermined feature information in an input image frame; and providing the identified best image frame, wherein the plurality of predetermined feature information comprises a facial recognition feature and a body feature of a person, and wherein the plurality of image frames include image frames obtained by the camera during a predetermined first time period before a user selection of a user interface button to capture an image is received and image frames obtained by the camera during a predetermined second time period after the user selection is received.
10. The controlling method as claimed in claim 8, further comprising: identifying another best image frame from among the plurality of image frames; providing the identified best image frame and the identified other best image frame on a display of the electronic apparatus; and wherein the network model identifies the other best image frame based on a second feature recognized in a second image frame among the plurality of image frames.
9. The controlling method as claimed in claim 8, further comprising: identifying another best image frame from among the plurality of image frames, based on a second feature information recognized by the at least one network model in the other best image frame, wherein the providing the identified best image frame comprises providing the identified best image frame and the identified other best image frame on a display of the electronic apparatus.
12. The controlling method as claimed in claim 8, wherein the body feature is a feature indicative of a jumping movement of a body.
12. The controlling method as claimed in claim 8, wherein the body feature is a feature indicative of a jumping movement of a body.
13. The controlling method as claimed in claim 9, further comprising: controlling the camera to stop capturing in response to the predetermined second time period elapsing.
13. The controlling method as claimed in claim 8, wherein the capturing the plurality of image frames comprises: controlling the camera to stop capturing in response to the predetermined second time period elapsing.
14. The controlling method as claimed in claim 9, further comprising: controlling the camera to capture the plurality of image frames without notifying a user that recording of plural frames has started.
14. The controlling method as claimed in claim 8, wherein the capturing the plurality of image frames comprises controlling the camera to capture the plurality of image frames without notifying a user that recording of plural frames has started.
15. A non-transitory computer-readable recording medium having recorded therein instructions executable by a processor to perform a controlling method of an electronic apparatus comprising a camera, the controlling method comprising: capturing, via the camera, a plurality of image frames; identifying a best image frame from among the plurality of image frames by applying the plurality of image frames to a network model; and providing the identified best image frame; and wherein the network model recognizes a plurality of features from the plurality of image frames, and identifies a first image frame as the best image frame based on a first feature recognized in the first image frame among the plurality of image frames, and wherein the first feature comprises a facial recognition feature and a body feature of a person recognized in the first image.
15. A non-transitory computer-readable recording medium having recorded therein instructions executable by a processor to perform a controlling method of an electronic apparatus comprising a camera, the controlling method comprising: capturing, via the camera, a plurality of image frames; identifying a best image frame from among the plurality of image frames, based on a first feature information recognized in the best image frame by at least one network model for recognizing a plurality of predetermined feature information in an input image frame; and providing the identified best image frame, wherein the plurality of predetermined feature information comprises a facial recognition feature and a body feature of a person, and wherein the plurality of image frames include image frames obtained by the camera during a predetermined first time period before a user selection of a user interface button to capture an image is received and image frames obtained by the camera during a predetermined second time period after the user selection is received.
U.S. Application No. 18/912,706
U.S. Patent No. 12,142,021
1. An electronic apparatus comprising: a camera; a processor configured to control the camera; and a memory stores at least one instruction, wherein the processor is configured, by executing the at least one instruction, to: obtain a plurality of image frames, identify a best image frame from among the plurality of image frames by applying the plurality of image frames to a network model, and provide the identified best image frame, wherein the network model recognizes a plurality of features from the plurality of image frames, and identifies the best image frame based on a first feature recognized in a first image frame among the plurality of image frames, and wherein the first feature comprises a facial recognition feature and a body feature of a person recognized in the first image. 2. The electronic apparatus as claimed in claim 1, wherein the plurality of image frames include image frames obtained by the camera during a predetermined first time period before a user selection of a user interface button is received and image frames obtained by the camera during a predetermined second time period after the user selection is received.
1. An electronic apparatus comprising: a camera; a memory storing one or more instructions; and at least one processor configured to execute the one or more instructions stored in the memory, wherein the at least one processor, by executing the one or more instructions, is further configured to: obtain feature information corresponding to each of a plurality of image frames using the at least one network model, identify one of the plurality of image frames as a best image frame, based on feature information corresponding to the best image frame recognized by the at least one network model, and provide the identified best image frame, wherein the at least one network model is a model trained to output feature information corresponding to an input image frame based on facial expression feature and body feature of a person included in the input image frame, and wherein the plurality of image frames include image frames obtained by the camera during a predetermined first time period before a user selection of a user interface button to capture an image is received and image frames obtained by the camera during a predetermined second time period after the user selection is received.
3. The electronic apparatus as claimed in claim 1, wherein the processor is further configured to execute the at least one instruction to: identify another best image frame from among the plurality of image frames, wherein the network model identifies the other best image frame based on a second feature recognized in a second image frame among the plurality of image frames, and provide the identified best image frame and the identified other best image frame on a display of the electronic apparatus.
3. The electronic apparatus as claimed in claim 1, wherein the at least one processor is further configured to execute the one or more instructions to: identify another best image frame from among the plurality of image frames, based on feature information corresponding to other best image frame recognized by the at least one network model, and provide the identified best image frame and the identified other best image frame on a display of the electronic apparatus.
4. The electronic apparatus as claimed in claim 3, wherein the second feature comprises a facial recognition feature and a body feature of a person recognized in the second image frame. 5. The electronic apparatus as claimed in claim 1, wherein the body feature is a feature indicative of a jumping movement of a body.
4. The electronic apparatus as claimed in claim 1, wherein the facial expression feature is a feature indicative of a smiling of the person, and wherein the body feature is a feature indicative of a jumping movement of a body.
8. A controlling method of an electronic apparatus comprising a camera, the controlling method comprising: capturing, via the camera, a plurality of image frames; identifying a best image frame from among the plurality of image frames by applying the plurality of image frames to a network model; and providing the identified best image frame; and wherein the network model recognizes a plurality of features from the plurality of image frames, and identifies a first image frame as the best image frame based on a first feature recognized in the first image frame among the plurality of image frames, and wherein the first feature comprises a facial recognition feature and a body feature of a person recognized in the first image. 9. The controlling method as claimed in claim 8, wherein the plurality of image frames include image frames obtained by the camera during a predetermined first time period before a user selection of a user interface button is received and image frames obtained by the camera during a predetermined second time period after the user selection is received.
6. A controlling method of an electronic apparatus comprising a camera, the controlling method comprising: obtaining feature information corresponding to each of a plurality of image frames using at least one network model, identifying one of the plurality of image frames as a best image frame, based on feature information corresponding to the best image frame recognized by the at least one network model, and providing the identified best image frame, wherein the at least one network model is a model trained to output feature information corresponding to an input image frame based on facial expression feature and body feature of a person included in the input image frame, and wherein the plurality of image frames include image frames obtained by the camera during a predetermined first time period before a user selection of a user interface button to capture an image is received and image frames obtained by the camera during a predetermined second time period after the user selection is received.
15. A non-transitory computer-readable recording medium having recorded therein instructions executable by a processor to perform a controlling method of an electronic apparatus comprising a camera, the controlling method comprising: capturing, via the camera, a plurality of image frames; identifying a best image frame from among the plurality of image frames by applying the plurality of image frames to a network model; and providing the identified best image frame; and wherein the network model recognizes a plurality of features from the plurality of image frames, and identifies a first image frame as the best image frame based on a first feature recognized in the first image frame among the plurality of image frames, and wherein the first feature comprises a facial recognition feature and a body feature of a person recognized in the first image.
11. A non-transitory computer readable recording medium including a program for executing a method of controlling an electronic apparatus comprising a camera, the method comprising: obtaining feature information corresponding to each of a plurality of image frames using at least one network model, identifying one of the plurality of image frames as a best image frame, based on feature information corresponding to the best image frame recognized by the at least one network model, and providing the identified best image frame, wherein the at least one network model is a model trained to output feature information corresponding to an input image frame based on facial expression feature and body feature of a person included in the input image frame, and wherein the plurality of image frames include image frames obtained by the camera during a predetermined first time period before a user selection of a user interface button to capture an image is received and image frames obtained by the camera during a predetermined second time period after the user selection is received.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims [1, 3-5, 8, 10-12 and 15] are rejected under 35 U.S.C. 103 as being unpatentable over Sinha (US 2019/0180109) in view of Desnoyer (US 2016/0188997).
Re Claim [1]: Sinha discloses an electronic apparatus (see 8, fig. 1) comprising: a camera (see ¶0035, digital cameras); a processor configured to control the camera (see 24, fig. 8, by virtue of controlling the network interface 29 as described in ¶0035); and a memory (27, fig. 8) that stores at least one instruction (see ¶0034, the memory 27, which may include ROM or flash memory (neither shown), and RAM (not shown), as previously noted; the RAM is generally the main memory into which the operating system and application programs are loaded), wherein the processor (24) is configured, by executing the at least one instruction (see fig. 7), to: obtain a plurality of image frames (see step 710, fig. 7),
identify a best image frame from among the plurality of image frames (see step 780, fig. 7) by applying the plurality of image frames to a network model (see ¶0020, the machine learning algorithm may be trained on a data set that includes the images that were scored by the human scorers, i.e., the image frames have label data), and provide the identified best image frame (see step 790, fig. 7), wherein the network model recognizes a plurality of features from the plurality of image frames (see ¶0020, the machine learning algorithm may be trained on a dataset that includes image frames having the highest probability of being a good image (e.g., the image is more often than not selected as being the better image in pairwise comparisons) to identify features associated with desirable images), and identifies the best image frame based on a first feature recognized in a first image frame among the plurality of image frames (see step 790, fig. 7 and ¶0032, FIG. 7 is an example of the disclosed process for scoring image frames by the model).
Sinha does not appear to explicitly disclose that the first feature comprises a facial recognition feature and a body feature of a person recognized in the first image.
Nonetheless, in the same field of endeavor, Desnoyer discloses an image processing system similar to that of Sinha (see, for example, Desnoyer fig. 9A). Desnoyer further discloses that a first feature comprises a facial recognition feature and a body feature of a person recognized in the first image (see ¶0135, whether faces or people are depicted, and if so, their location, relative size, or orientation in the image; whether any depicted objects are in motion; contrast and blur values; or any other suitable feature that may be determined from the image data).
Hence, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Sinha with the teachings of Desnoyer, since doing so would allow each region of an image to be evaluated.
Re Claim [3]: Sinha as modified further discloses wherein the processor is further configured to execute the at least one instruction to: identify another best image frame from among the plurality of image frames, wherein the network model identifies the other best image frame based on a second feature recognized in a second image frame among the plurality of image frames, and provide the identified best image frame and the identified other best image frame on a display of the electronic apparatus (see Sinha fig. 7, all steps, and ¶0032, FIG. 7 is an example of the disclosed process for scoring image frames by the model, computing distance, and applying a suppressor curve; the disclosed functions may be stored, for example, on a computer readable medium that are read by a processor as a series of instructions [by virtue of selecting another best image based on the scoring model, since the steps depicted in fig. 7 are performed for every video capture]).
Re Claim [4]: Sinha as modified further discloses wherein the second feature comprises a facial recognition feature and a body feature of a person recognized in the second image frame (see Desnoyer ¶0135, whether faces or people are depicted, and if so, their location, relative size, or orientation in the image; whether any depicted objects are in motion; contrast and blur values; or any other suitable feature that may be determined from the image data [the claim language does not explicitly require the second feature to be different from the first feature; the second feature can be a facial feature of another subject]).
Re Claim [5]: Sinha as modified further discloses wherein the body feature is a feature indicative of a jumping movement of a body (see Desnoyer ¶0059, in particular embodiments, the scoring engine 314 may calculate the valence scores for image frames based on actions depicted in the image frames; as an example and not by way of limitation, certain actions such as running, jumping, etc. may be considered favorable).
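For orientation only, the following is a minimal, hypothetical sketch (in Python, using made-up names such as Frame, score_frame, and select_best_frames) of the frame-selection flow discussed in the analysis above: each captured frame is scored by a stand-in for a trained network model based on a facial feature and a body feature (e.g., jumping), the highest-scoring frame is provided as the best frame, and another best frame may be selected based on a second feature. It is not the actual implementation of Sinha, Desnoyer, or the claims.

# Illustrative sketch only; hypothetical names, not the references' code.
from dataclasses import dataclass
from typing import Dict, List, Sequence


@dataclass
class Frame:
    index: int
    features: Dict[str, float]  # e.g., {"smile": 0.9, "jump": 0.7}


def score_frame(frame: Frame, weights: Dict[str, float]) -> float:
    """Stand-in for a trained network model: weighted sum of recognized features."""
    return sum(weights.get(name, 0.0) * value for name, value in frame.features.items())


def select_best_frames(frames: Sequence[Frame]) -> List[Frame]:
    """Return the best frame (facial + body features) and another best frame (body feature)."""
    best = max(frames, key=lambda f: score_frame(f, {"smile": 0.6, "jump": 0.4}))
    other = max(frames, key=lambda f: score_frame(f, {"jump": 1.0}))
    return [best, other]


if __name__ == "__main__":
    # Frames captured during the periods before and after the shutter press.
    captured: List[Frame] = [
        Frame(0, {"smile": 0.2, "jump": 0.1}),
        Frame(1, {"smile": 0.9, "jump": 0.3}),  # best overall (smiling)
        Frame(2, {"smile": 0.4, "jump": 0.8}),  # best jump frame
    ]
    for f in select_best_frames(captured):
        print(f"selected frame index={f.index}, features={f.features}")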
Re Claim [8]: except for changes in wording, claim 8 has substantially the same limitations as claim [1] above and is thus analyzed and rejected by the same reasoning.
Re Claim [10]: claim 10 has substantially the same limitations as claim [3] above and is thus analyzed and rejected by the same reasoning.
Re Claim [11]: claim 11 has substantially the same limitations as claim [4] above and is thus analyzed and rejected by the same reasoning.
Re Claim [12]: claim 12 has substantially the same limitations as claim [5] above and is thus analyzed and rejected by the same reasoning.
Re Claim [15]: claim 15 recites a recording medium storing a program that performs the functions of claim [1] and the steps of claim [8], and is thus analyzed and rejected by the same reasoning.
Examiner Note II: Claims [2, 6-7, 9 and 13-14] are rejected only on the ground of the nonstatutory double patenting rejections set forth above; no prior art has been found that reasonably addresses the scope of their limitations.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Van (U.S. Patent No. 9,020,244) discloses that the trained machine learning engine generates model-generated scores for the images; to select a representative image for a particular video item, candidate images for that particular video item may be ranked based on their model-generated scores, and the candidate image with the top model-generated score may be selected as the representative image for the video item (see col. 3, lines 12-19).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED A BERHAN whose telephone number is (571) 270-5094. The examiner can normally be reached 9:00 AM - 5:00 PM (MAX-Flex).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Twyler Haskins can be reached at 571-272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AHMED A BERHAN/Primary Examiner, Art Unit 2639