DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an acquisition module configured to”, “a first determination module configured to”, “a second determination module configured to”, and “a generation module configured to” in claim 13.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because it recites a computer program product comprising a computer program without a medium.
As the courts' definitions of machines, manufactures and compositions of matter indicate, a product must have a physical or tangible form in order to fall within one of these statutory categories. Digitech, 758 F.3d at 1348, 111 USPQ2d at 1719. Thus, the Federal Circuit has held that a product claim to an intangible collection of information, even if created by human effort, does not fall within any statutory category. Digitech, 758 F.3d at 1350, 111 USPQ2d at 1720 (claimed "device profile" comprising two sets of data did not meet any of the categories because it was neither a process nor a tangible product). Similarly, software expressed as code or a set of instructions detached from any medium is an idea without physical embodiment. See Microsoft Corp. v. AT&T Corp., 550 U.S. 437, 449, 82 USPQ2d 1400, 1407 (2007); see also Gottschalk v. Benson, 409 U.S. 63, 67, 175 USPQ 673, 675 (1972) (an "idea" of itself is not patentable). Thus, a product claim to a software program that does not also contain at least one structural limitation (such as a "means plus function" limitation) has no physical or tangible form, and thus does not fall within any statutory category. Another example of an intangible product that does not fall within a statutory category is a paradigm or business model for a marketing company. In re Ferguson, 558 F.3d 1359, 1364, 90 USPQ2d 1035, 1039-40 (Fed. Cir. 2009).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 13-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kim et al. (US 2020/0351450, hereinafter Kim).
Regarding claim 1, Kim discloses a video generation method, comprising: acquiring a plurality of images and music matched with the plurality of images (see fig. 2; fig. 3, background music 310 and video 315; [0075]); determining first feature information of the plurality of images and second feature information of the music (fig. 2, [0075]; fig. 3, extract a first feature from the background music in operation 320 and a second feature from the video 315 in operation 325); determining a target rendering effect combination according to the first feature information, the second feature information and a pre-stored plurality of rendering effects, wherein the rendering effects are animation, special effect or transition (fig. 2, [0076]-[0077]; fig. 3, in operation 330 determine a special effect based on the combination of the first feature and the second feature); and generating a video according to the plurality of images, the music and the target rendering effect combination (fig. 2, [0049], applying the special effect to the video based on the features).
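For illustration only (not part of the record or of Kim's disclosure), the claimed method maps onto a simple pipeline: acquire images and matched music, determine feature information for each, select a target rendering effect combination from a pre-stored set, and generate the video. All names, the tempo-based mood feature, and the two-entry effect table below are hypothetical stand-ins:

```python
# Pre-stored plurality of rendering effects (hypothetical table keyed by mood).
PRESTORED_EFFECTS = {
    "upbeat": {"animation": "zoom", "special_effect": "sparkle", "transition": "cut"},
    "calm": {"animation": "pan", "special_effect": "soft_glow", "transition": "fade"},
}

def image_features(images):
    # Stand-in "first feature information": average brightness of the images.
    return sum(images) / len(images)

def music_features(tempo_bpm):
    # Stand-in "second feature information": a mood class derived from tempo.
    return "upbeat" if tempo_bpm >= 120 else "calm"

def select_effects(first_info, second_info):
    # Determine the target rendering effect combination from the feature
    # information and the pre-stored effects (here keyed only on the mood).
    return PRESTORED_EFFECTS[second_info]

def generate_video(images, tempo_bpm):
    # Generate a video description from the images, music and selected effects.
    first = image_features(images)
    second = music_features(tempo_bpm)
    effects = select_effects(first, second)
    return {"frames": len(images), "mood": second, **effects}

video = generate_video([0.4, 0.6, 0.8], tempo_bpm=128)
```

The sketch is only meant to fix the sequence of steps being mapped to Kim; it does not reflect how either Kim or the application actually computes features.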
Regarding claim 13, Kim discloses a video generation apparatus, comprising: an acquisition module configured to acquire a plurality of images and music matched with the plurality of images (see fig. 2; fig. 3, background music 310 and video 315; [0075]); a first determination module configured to determine first feature information of the plurality of images and second feature information of the music (fig. 2, [0075]; fig. 3, extract a first feature from the background music in operation 320 and a second feature from the video 315 in operation 325); a second determination module configured to determine a target rendering effect combination according to the first feature information, the second feature information and a pre-stored plurality of rendering effects, the rendering effects being animation, special effect or transition (fig. 2, [0076]-[0077]; fig. 3, in operation 330 determine a special effect based on the combination of the first feature and the second feature); and a generation module configured to generate a video according to the plurality of images, the music and the target rendering effect combination (fig. 2, [0049], applying the special effect to the video based on the features).
Regarding claim 14, Kim discloses an electronic device, comprising: a processor, and a memory communicatively connected to the processor; the memory storing computer-executable instructions ([0111]-[0112], methods according to the above-described example embodiments may be recorded in non-transitory computer-readable media including program instructions to implement various operations of the above-described example embodiments); and the processor executing the computer-executable instructions stored in the memory to implement the method according to claim 1 (id.).
Regarding claim 15, Kim discloses a non-transitory computer-readable storage medium, having computer-executable instructions stored thereon, which, in response to being executed by a processor, implement the method ([0110]-[0112], methods according to the above-described example embodiments may be recorded in non-transitory computer-readable media including program instructions to implement various operations of the above-described example embodiments), comprising: acquiring a plurality of images and music matched with the plurality of images (see fig. 2; fig. 3, background music 310 and video 315; [0075]); determining first feature information of the plurality of images and second feature information of the music (fig. 2, [0075]; fig. 3, extract a first feature from the background music in operation 320 and a second feature from the video 315 in operation 325); determining a target rendering effect combination according to the first feature information, the second feature information and a pre-stored plurality of rendering effects, wherein the rendering effects are animation, special effect or transition (fig. 2, [0076]-[0077]; fig. 3, in operation 330 determine a special effect based on the combination of the first feature and the second feature); and generating a video according to the plurality of images, the music and the target rendering effect combination (fig. 2, [0049], applying the special effect to the video based on the features).
Claim 16 is rejected for similar reasons as discussed for claim 15 above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kim as applied to claim 1 above, and further in view of Ohishi et al. (US 2022/0319493, hereinafter Ohishi).
Regarding claim 10, Kim teaches all the limitations of claim 1, but does not teach the following limitations, which Ohishi teaches: the determining first feature information of the plurality of images and second feature information of the music comprises: performing feature extraction on the plurality of images through a pre-stored image feature extraction model, to obtain the first feature information of the plurality of images; and performing feature extraction on the music through a pre-stored music feature extraction model, to obtain the second feature information ([0028], [0069], the image encoder is a model that receives an image and outputs an image feature; the audio encoder is a model that receives a speech in a predetermined language and outputs an audio feature).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to use models to extract image and audio features, as taught by Ohishi, in order to associate the audio and video accurately.
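For illustration only (not from Ohishi), the claim 10 arrangement amounts to two separate pre-stored models, one mapping an image to an image feature and one mapping audio to an audio feature. The "models" below are hypothetical stand-in functions, not Ohishi's actual encoders:

```python
def image_encoder(pixels):
    # Stand-in pre-stored image feature extraction model: returns a
    # two-element feature (mean and max pixel value).
    return (sum(pixels) / len(pixels), max(pixels))

def music_encoder(samples):
    # Stand-in pre-stored music feature extraction model: returns the
    # signal energy of the audio samples.
    return sum(s * s for s in samples)

# Each input type is routed through its own model, as the claim recites.
first_feature = image_encoder([0.1, 0.5, 0.9])
second_feature = music_encoder([0.0, 1.0, -1.0])
```

The point of the sketch is only the structure: distinct, pre-stored models producing the first and second feature information independently.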
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kim as applied to claim 1 above, and further in view of Murray (US 2009/0307207).
Regarding claim 11, Kim teaches all the limitations of claim 1 above, but does not teach the following limitations, which Murray teaches: the target rendering effect combination comprises the animation, special effect and transition corresponding to each of the plurality of images ([0021], [0078], animation; [0083], special effect transition such as fading or dissolving images); and the generating a video according to the plurality of images, the music and the target rendering effect combination comprises: sequentially displaying the plurality of images according to the animation, special effect and transition corresponding to each of the plurality of images in the target rendering effect combination, and playing the music, to generate the video ([0078], [0083], a video editor creates a multimedia presentation and applies a special effect, transition and animation).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to apply special effects, transitions and animation to multimedia, as taught by Murray, in order to make the video visually pleasing to the viewer.
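For illustration only (not from Murray), claim 11's generation step gives each image its own animation, special effect and transition, displays the images in that order, and plays the music alongside. The timeline structure and names below are hypothetical:

```python
def render_sequence(per_image_effects, music_track):
    # Build a per-image timeline: each entry pairs an image index with its
    # own animation, special effect and transition, in display order.
    timeline = []
    for index, effects in enumerate(per_image_effects):
        timeline.append({"image": index, **effects})
    # The matched music plays over the whole sequence.
    return {"timeline": timeline, "audio": music_track}

sequence = render_sequence(
    [
        {"animation": "zoom", "special_effect": "sparkle", "transition": "fade"},
        {"animation": "pan", "special_effect": "none", "transition": "dissolve"},
    ],
    "matched_track",
)
```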
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Kim as applied to claim 1 above, and further in view of Zheng et al. (US 2024/0296870, hereinafter Zheng).
Regarding claim 12, Kim teaches all the limitations of claim 1 above, but does not teach the following limitations, which Zheng teaches: in response to a selection operation on a plurality of target images in a plurality of candidate images, determining the plurality of target images as the plurality of images ([0038], a determination unit configured to determine, in response to a clicking operation on a button in the video editing interface, a target audio and a target image for video synthesis); and in response to a selection operation on target music in a plurality of candidate music, determining the target music as the music matched with the plurality of images (id.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to select audio and images for video synthesis, as taught by Zheng, in order to improve the efficiency of generating a video file ([0087]).
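For illustration only (not from Zheng), the claim 12 selection step takes a user's selection operation over candidate images and candidate music and designates the chosen items as the images and matched music used for video synthesis. The candidate lists and handler name below are hypothetical:

```python
# Hypothetical pools of candidates shown in a video editing interface.
candidate_images = ["img_a", "img_b", "img_c", "img_d"]
candidate_music = ["track_1", "track_2"]

def on_select(image_indices, music_index):
    # In response to a selection operation, the chosen candidates become
    # the target images and the music matched with those images.
    target_images = [candidate_images[i] for i in image_indices]
    target_music = candidate_music[music_index]
    return target_images, target_music

images, music = on_select([0, 2], 1)
```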
Allowable Subject Matter
Claims 2-9 and 18-21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GIRUMSEW WENDMAGEGN, whose telephone number is (571) 270-1118. The examiner can normally be reached 9:00 AM-7:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thai Tran can be reached at (571) 272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
GIRUMSEW WENDMAGEGN
Primary Examiner
Art Unit 2484
/GIRUMSEW WENDMAGEGN/Primary Examiner, Art Unit 2484