DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2 and 6-7 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by ALCHERA INC. (KR 102144975), cited in the applicant-submitted IDS dated 01/17/2024 (Translation KR 102144975B1.pdf).
As to claim 1, ALCHERA INC. (KR 102144975) discloses a method of generating a neural network model by using a video (see par 0006, i.e., "the machine learning system may further include an image extraction module that receives video data as input, extracts a first extracted image and a second extracted image from the video data, and provides the extracted images to a first similarity judgment module and a machine learning model"), the method comprising: determining an image similarity between consecutive frames from among a plurality of frames included in the video (see pars 0006-0007, i.e., "the first extracted image and the second extracted image may correspond to consecutive image frames among the video data"; see par 0034, i.e., a machine learning system that learns video data and may include an image extraction module (10), a first similarity judgment module (20), a machine learning model (30), a learning module (32), a second similarity judgment module (40), and a learning data selection module (50); see par 0036, i.e., video data may contain multiple image frames; and see par 0039, i.e., "first extracted image (I1) and the second extracted image (I2) correspond to consecutive image frames in the video data"); generating training frame data by excluding at least one of the consecutive frames, when the image similarity is equal to or greater than a threshold value (see pars 0074-0080, i.e., "The learning data selection module (50) can select learning data for training the machine learning model (30) based on the first similarity (S(I)) and the second similarity (S(O)). Specifically, the learning data selection module (50) can select learning data for training the machine learning model (30) by comparing the first similarity (S(I)) and the second similarity (S(O)) with a predefined threshold" and "the learning data selection module (50) can select at least one of the first extracted image (I1) and the second extracted image (I2) as learning data when the first similarity (S(I)) is greater than or equal to a predefined first threshold (T(I)) and the second similarity (S(O)) is less than a predefined second threshold (T(O))", i.e., extracted images that do not satisfy this threshold-based selection criterion are excluded from the learning data; see also par 0007, i.e., consecutive image frames among the video data; par 0036, i.e., video data may contain multiple image frames that are consecutively connected; and par 0039, i.e., the first extracted image (I1) and the second extracted image (I2) correspond to consecutive image frames in the video data); and generating the neural network model based on the training frame data (see pars 0003, 0005, and 0019, i.e., selecting learning data for training the machine learning model based on the first similarity and the second similarity; and see pars 0026 and 0036-0038).
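For illustration only, the frame-selection technique described in the cited paragraphs (retaining a frame for training only when it is sufficiently dissimilar from the preceding retained frame) can be sketched as follows. The function names, the mean-absolute-difference similarity measure, and the threshold value are illustrative assumptions for this sketch and are not taken from the cited reference.

```python
# Illustrative sketch (not from the cited reference): exclude a frame from the
# training data when its similarity to the previously kept frame is equal to
# or greater than a threshold.
import numpy as np

def frame_similarity(f1: np.ndarray, f2: np.ndarray) -> float:
    """Toy similarity in [0, 1]: 1 minus the normalized mean absolute difference."""
    diff = np.mean(np.abs(f1.astype(float) - f2.astype(float)))
    return 1.0 - diff / 255.0

def select_training_frames(frames: list, threshold: float = 0.95) -> list:
    """Keep the first frame; exclude each subsequent frame whose similarity to
    the most recently kept frame is equal to or greater than the threshold."""
    if not frames:
        return []
    kept = [frames[0]]
    for frame in frames[1:]:
        if frame_similarity(kept[-1], frame) < threshold:
            kept.append(frame)
    return kept
```

In this sketch, near-duplicate consecutive frames (similarity at or above the threshold) are dropped, so the resulting training frame data contains only frames that differ meaningfully from their predecessors.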
As to claim 2, ALCHERA INC. (KR 102144975) discloses wherein the generating of the training frame data comprises setting the threshold value in response to a user command (see pars 0014 and 0018, i.e., "the first threshold and the second threshold may be selected from the first threshold set and the second threshold set, based on user input", and see pars 0084-0085).
As to claim 6, ALCHERA INC. (KR 102144975) discloses a computer device (see par 0032, i.e., computer or computing device) comprising: a memory (see par 0031, i.e., a memory) in which a video, a neural network model, and training frame data are stored (see par 0031, i.e., the machine learning system (1) includes a processor and a memory; see par 0006, i.e., "the machine learning system may further include an image extraction module that receives video data as input, extracts a first extracted image and a second extracted image from the video data, and provides the extracted images to a first similarity judgment module and a machine learning model"; and see pars 0003, 0005, and 0019, i.e., selecting learning data for training the machine learning model based on the first similarity and the second similarity, and pars 0026 and 0036-0038); and a processor (see par 0031, i.e., a processor) configured to determine an image similarity between consecutive frames from among a plurality of frames included in the video (see pars 0006-0007, i.e., "the first extracted image and the second extracted image may correspond to consecutive image frames among the video data"; see par 0034, i.e., a machine learning system that learns video data and may include an image extraction module (10), a first similarity judgment module (20), a machine learning model (30), a learning module (32), a second similarity judgment module (40), and a learning data selection module (50); see par 0036, i.e., video data may contain multiple image frames; and see par 0039, i.e., "first extracted image (I1) and the second extracted image (I2) correspond to consecutive image frames in the video data"), generate the training frame data by excluding at least one of the consecutive frames when the image similarity is equal to or greater than a threshold value (see pars 0074-0080, i.e., "The learning data selection module (50) can select learning data for training the machine learning model (30) based on the first similarity (S(I)) and the second similarity (S(O)). Specifically, the learning data selection module (50) can select learning data for training the machine learning model (30) by comparing the first similarity (S(I)) and the second similarity (S(O)) with a predefined threshold" and "the learning data selection module (50) can select at least one of the first extracted image (I1) and the second extracted image (I2) as learning data when the first similarity (S(I)) is greater than or equal to a predefined first threshold (T(I)) and the second similarity (S(O)) is less than a predefined second threshold (T(O))", i.e., extracted images that do not satisfy this threshold-based selection criterion are excluded from the learning data; see also par 0007, i.e., consecutive image frames among the video data; par 0036, i.e., video data may contain multiple image frames that are consecutively connected; and par 0039, i.e., the first extracted image (I1) and the second extracted image (I2) correspond to consecutive image frames in the video data), and generate the neural network model based on the training frame data (see pars 0003, 0005, and 0019, i.e., selecting learning data for training the machine learning model based on the first similarity and the second similarity; and see pars 0026 and 0036-0038).
As to claim 7, ALCHERA INC. (KR 102144975) discloses wherein the processor (see par 0031, i.e., a processor) is further configured to set the threshold value in response to a user command (see pars 0014 and 0018, i.e., "the first threshold and the second threshold may be selected from the first threshold set and the second threshold set, based on user input", and see pars 0084-0085).
Allowable Subject Matter
Claims 3-5 and 8-10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding dependent claim 3, the closest prior art of record, namely, ALCHERA INC. (KR 102144975), cited in the applicant-submitted IDS dated 01/17/2024 (Translation KR 102144975B1.pdf) and discussed above, does not disclose, teach, or suggest wherein the determining of the image similarity comprises: dividing each of the plurality of frames into a plurality of blocks; calculating a histogram similarity between corresponding blocks between the consecutive frames from among the plurality of blocks; and determining the image similarity based on the histogram similarity, as recited in dependent claim 3.
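For illustration only, the block-wise histogram comparison recited in claim 3 could take the following form. The grid size, the number of histogram bins, and the histogram-intersection comparison are illustrative assumptions for this sketch, not details drawn from the claim or the reference.

```python
# Illustrative sketch (assumptions noted above): divide each grayscale frame
# into a grid of blocks, compare the histograms of corresponding blocks, and
# average the per-block scores into one image similarity.
import numpy as np

def block_histogram_similarity(f1: np.ndarray, f2: np.ndarray,
                               blocks: int = 4, bins: int = 16) -> float:
    """Return a similarity in [0, 1] between two equally sized grayscale
    frames, computed as the mean histogram intersection over a grid of
    blocks x blocks corresponding blocks."""
    h, w = f1.shape
    bh, bw = h // blocks, w // blocks
    scores = []
    for i in range(blocks):
        for j in range(blocks):
            b1 = f1[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            b2 = f2[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            h1, _ = np.histogram(b1, bins=bins, range=(0, 256))
            h2, _ = np.histogram(b2, bins=bins, range=(0, 256))
            h1 = h1 / h1.sum()  # normalize to a distribution
            h2 = h2 / h2.sum()
            scores.append(np.minimum(h1, h2).sum())  # histogram intersection
    return float(np.mean(scores))
```

Under these assumptions, identical frames score 1.0 and frames with fully disjoint per-block intensity distributions score 0.0; the resulting value could serve as the image similarity compared against the threshold.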
Claims 4-5 are objected to because they depend on objected-to claim 3, discussed above.
Regarding dependent claim 8, the closest prior art of record, namely, ALCHERA INC. (KR 102144975), cited in the applicant-submitted IDS dated 01/17/2024 (Translation KR 102144975B1.pdf) and discussed above, does not disclose, teach, or suggest wherein the processor is further configured to divide each of the plurality of frames into a plurality of blocks, calculate a histogram similarity between corresponding blocks between the consecutive frames from among the plurality of blocks, and determine the image similarity based on the histogram similarity, as claimed in dependent claim 8.
Claims 9-10 are objected to because they depend on objected-to claim 8, discussed above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lim et al. (US 2019/0035047 A1) teaches a neural network model that determines a similarity metric for an image through distance comparison of frame signatures of consecutive images (see par 0110), and a controller that can use the proposed auto exposure stability signal to exclude frames that have fluctuating exposure from being included in a burst video (see par 0164).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOV POPOVICI whose telephone number is (571)272-4083. The examiner can normally be reached Monday - Friday 8:00 am- 4:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi M. Sarpong can be reached at 571-270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DOV POPOVICI/Primary Examiner, Art Unit 2681