DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 2-7 are objected to because of the following informalities: Claims 2-7 recite a preamble which does not correspond to the preamble of claim 1.
Claim 2 recites “The method of automatically matching a song to a video work comprised of a plurality of video frames according to claim 1…to form an adaptation function.”
The Office suggests for claim 2:
--The method according to claim 1 further comprising automatically matching a song to said video work comprised of said plurality of video frames
(g1) analyzing said video frames to identify at least one object in said video work, and
(g2) using at least said identified video cuts, said identified indicia of motion or speed, said time-varying luminosity value series, said time-varying color value series, and at least one of said at least one identified objects in said video work to form said adaptation function.--
Claim 3 recites “The method of automatically matching a song to a video work comprised of a plurality of video frames according to claim 1,…”
The Office suggests for claim 3:
--The method according to claim 1 further comprising automatically matching a song to said video work comprised of said plurality of video frames,--.
Claims 4-7 recite “The method of automatically adapting an audio work to a video work using said adaptation function according to claim 3,…”
The Office suggests:
--The method according to claim 3 further comprising automatically adapting said audio work to said video work comprised of said plurality of video frames using said adaptation function,--.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 3-9 are rejected under 35 U.S.C. 103 as being unpatentable over Mateos Sole (US 2021/0241739 and hereafter referred to as “Mateos”) in view of Chen et al. (US 2012/0033132 and hereafter referred to as “Chen”).
Regarding Claim 1, Mateos discloses a method of automatically matching an audio work to a video work, wherein said video work is comprised of a plurality of video frames, comprising the steps of:
(a) accessing said video work (Page 4, paragraph 0078);
(b) analyzing said video frames to identify one or more video cuts (Page 5, paragraph 0086, cuts are determined based on calculated differences between frames; Page 3, paragraph 0045);
(d) analyzing said video frames to obtain a time-varying luminosity value series (Page 4, paragraph 0084, Page 2, paragraph 0038, lighting may change);
(e) analyzing said video frames to obtain a time-varying color value series (Page 4, paragraph 0078-0083, Page 5, paragraph 0086);
(f) using at least said identified video cuts, said time-varying luminosity value series, and said time-varying color value series to form an adaptation function (Page 4, paragraph 0078-0086, Page 1, paragraph 0021-0025 – function descriptive of narrative model/FNM);
(g) using at least said adaptation function to adapt said audio work to said video work (Page 6, paragraph 0093-0096, Figure 2, Figure 4, Figure 5, Page 1, paragraph 0021-0025, Page 2, paragraph 0027, 0038, Page 3, paragraph 0041, 0054),
(h) performing at least a part of said video work and said adapted audio work together for the user (Page 6, paragraph 0097, 0098).
Mateos does not explicitly disclose (c) analyzing said video frames to identify at least one indicia of motion or speed and (g) using said identified indicia of motion or speed to form an adaptation function.
Chen discloses (a) accessing said video work (Figure 12, 1202); (c) analyzing said video frames to identify at least one indicia of motion or speed (Figure 8A-8C, Page 5, paragraph 0075-0077, Page 6, paragraph 0092, Figure 10); (f) using at least said identified indicia of motion or speed to form an adaptation function (Figure 8A-8C, Page 5, paragraph 0075-0077, Page 6, paragraph 0092, Figure 10); (g) using at least said adaptation function to adapt said audio work to said video work (Figure 8A-8C, Page 5, paragraph 0075-0077, Page 6, paragraph 0092, Figure 10); and (h) performing at least a part of said video work and said adapted audio work together for the user (Page 6, paragraph 0097-0098). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Mateos to include the missing limitations as taught by Chen in order to derive the visual rhythm of a video signal and thereby have a more advanced method for finding content with matching rhythms (paragraph 0090) as disclosed by Chen.
Furthermore, in KSR International Co. v. Teleflex Inc., 82 USPQ2d 1385, 1395 (2007), the Court found that if all the claimed elements are known in the prior art, then one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art before the effective filing date of the invention.
Regarding Claim 3, Mateos and Chen disclose all the limitations of Claim 1. Mateos discloses automatically matching a song to a video work comprised of a plurality of video frames (Figure 2, Figure 4, Figure 5), wherein said adaptation function of step (h) comprises a plurality of time-varying instructions operating to adapt an energy of said audio work to match a content of said video work (Page 6, paragraph 0094, Page 4, paragraph 0040, 0042).
Regarding Claim 4, Mateos and Chen disclose all the limitations of Claim 3. Mateos discloses automatically adapting an audio work to a video work using said adaptation function (Page 4, paragraph 0078-0086, Page 1, paragraph 0021-0025 – function descriptive of narrative model/FNM), wherein step (f) comprises the steps of: (f1) modifying one or more energy levels of said audio work to match said energy levels of said adaptation function (Page 6, paragraph 0094, Page 4, paragraph 0040, 0042), or (f2) modifying one or more instrumentations of said audio work to match said energy levels of said adaptation function (Page 6, paragraph 0094, Page 4, paragraph 0040, 0042), or (f3) modifying one or more chord progressions of said audio work to match said energy levels of said adaptation function, or (f4) modifying one or more volume levels of said audio work to match said energy levels of said adaptation function (Page 6, paragraph 0094, Page 4, paragraph 0040, 0042), or (f5) modifying one or more volume progressions of said audio work to match said energy levels of said adaptation function (Page 6, paragraph 0094, Page 4, paragraph 0040, 0042).
Regarding Claim 5, Mateos and Chen disclose all the limitations of Claim 3. Mateos discloses automatically adapting an audio work to a video work using said adaptation function (Page 4, paragraph 0078-0086, Page 1, paragraph 0021-0025 – function descriptive of narrative model/FNM), wherein step (f) comprises the steps of: (f1) modifying one or more energy levels of said audio work to match said energy levels of said adaptation function (Page 6, paragraph 0094, Page 4, paragraph 0040, 0042), or (f2) modifying one or more instrumentations of said audio work to match said energy levels of said adaptation function (Page 6, paragraph 0094, Page 4, paragraph 0040, 0042), or (f3) modifying one or more chord progressions of said audio work to match said energy levels of said adaptation function, or (f4) modifying one or more volume progressions of said audio work to match said energy levels of said adaptation function (Page 6, paragraph 0094, Page 4, paragraph 0040, 0042).
Regarding Claim 6, Mateos and Chen disclose all the limitations of Claim 3. Mateos discloses automatically adapting an audio work to a video work using said adaptation function (Page 4, paragraph 0078-0086, Page 1, paragraph 0021-0025 – function descriptive of narrative model/FNM), wherein step (f) comprises the steps of: (f1) modifying one or more energy levels of said audio work to match said energy levels of said adaptation function (Page 6, paragraph 0094, Page 4, paragraph 0040, 0042), or (f2) modifying one or more instrumentations of said audio work to match said energy levels of said adaptation function (Page 6, paragraph 0094, Page 4, paragraph 0040, 0042), or (f3) modifying one or more chord progressions of said audio work to match said energy levels of said adaptation function.
Regarding Claim 7, Mateos and Chen disclose all the limitations of Claim 3. Mateos discloses automatically adapting an audio work to a video work using said adaptation function (Page 4, paragraph 0078-0086, Page 1, paragraph 0021-0025 – function descriptive of narrative model/FNM), wherein step (f) comprises the steps of: (f1) modifying one or more energy levels of said audio work to match said energy levels of said adaptation function (Page 6, paragraph 0094, Page 4, paragraph 0040, 0042), or (f2) modifying one or more instrumentations of said audio work to match said energy levels of said adaptation function (Page 6, paragraph 0094, Page 4, paragraph 0040, 0042).
Regarding Claim 8, Mateos and Chen disclose all the limitations of Claim 1. Mateos discloses wherein step (f) comprises the steps of: (f1) using at least said identified video cuts, said identified indicia of motion or speed, said time-varying luminosity value series, and said time-varying color value series to form a time-varying concept timeline (Page 4, paragraph 0078-0086, Page 1, paragraph 0021-0025 – function descriptive of narrative model/FNM; see also Figures 2-7, which show the time span of the video with the specific differences) and (f2) using said concept timeline to form an adaptation function (Page 4, paragraph 0078-0086, Page 1, paragraph 0021-0025 – function descriptive of narrative model/FNM). Chen discloses (f1) using at least said identified indicia of motion or speed to form a time-varying concept timeline (Figure 8A-8C, Page 5, paragraph 0075-0077, 0082, Page 6, paragraph 0092, Figure 10); (f2) using said concept timeline to form an adaptation function (Figure 8A-8C, Page 5, paragraph 0075-0077, 0082 – adaptive threshold is shifted with respect to time, Page 6, paragraph 0092, Figure 10). See motivation above.
Regarding Claim 9, Mateos and Chen disclose all the limitations of Claim 8. Mateos discloses wherein said concept timeline comprises a time-varying plurality of indicia indicating whether said video work is high energy or low energy at a particular point in time (Page 5, paragraph 0091 – anticlimaxes, low energy level; Page 2, paragraph 0027 – low level where hero is damaged; Page 4, paragraph buildup vs. anticlimax – high or low energy level).
Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Mateos in view of Chen, as applied to claim 1 above, further in view of Li (US 2023/0015498).
Regarding Claim 2, Mateos and Chen disclose all the limitations of Claim 1. Mateos discloses automatically matching a song to a video work comprised of a plurality of video frames (Figure 2, Figure 4, Figure 5), (g and g2) using at least said adaptation function to adapt said audio work to said video work (Page 6, paragraph 0093-0096, Figure 2, Figure 4, Figure 5, Page 1, paragraph 0021-0025, Page 2, paragraph 0027, 0038, Page 3, paragraph 0041, 0054). Chen discloses automatically matching a song to a video work (paragraph 0092), (g and g2) using at least said adaptation function to adapt said audio work to said video work (Figure 8A-8C, Page 5, paragraph 0075-0077, Page 6, paragraph 0092, Figure 10). The combination does not teach identifying the one object. Li discloses automatically matching a song to a video work comprised of a plurality of video frames (Page 1, paragraph 0017), wherein step (g) comprises the steps of: (g1) analyzing said video frames to identify at least one object in said video work (Page 1-2, paragraph 0013, 0014 – actor's expression, weather, scene, buildings, creature, character, things, person), and (g2) using at least said identified video cuts, said identified indicia of motion or speed, said time-varying luminosity value series, said time-varying color value series, and at least one of said at least one identified objects in said video work to form an adaptation function (Page 1-2, paragraph 0013-0014, 0018 – using movement, color value, lighting, objects, clip cutting or scene switches; Page 2, paragraph 0017 – a classification function to match audio). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the combination to include the missing limitations as taught by Li in order to enable a composition selection time for video creation and music copyright purchase and authorization time to be reduced (Page 1, paragraph 0003) as disclosed by Li.
Regarding Claim 10, Mateos and Chen disclose all the limitations of Claim 1. Mateos discloses wherein step (f) comprises the steps of: (f1) using at least said identified video cuts, said identified indicia of motion or speed, said time-varying luminosity value series, and said time-varying color value series to form a value (Page 4, paragraph 0078-0086, Page 1, paragraph 0021-0025 – function descriptive of narrative model/FNM; see also Figures 2-7, which show the time span of the video with the specific differences) and (f2) using said values to form an adaptation function (Page 4, paragraph 0078-0086, Page 1, paragraph 0021-0025 – function descriptive of narrative model/FNM). Chen discloses (f1) using at least said identified indicia of motion or speed to form values (Figure 8A-8C, paragraph 0102). See motivation above. The combination does not explicitly disclose a score chart. Li discloses wherein step (f) comprises the steps of: (f1) using at least said identified video cuts, said identified indicia of motion or speed, said time-varying luminosity value series, and said time-varying color value series to form a score chart (Page 1-2, paragraph 0013-0014, 0018 – using movement, color value, lighting, objects, clip cutting or scene switches; Page 2, paragraph 0017, 0022 – scoring mode), and (f2) using said score chart to form an adaptation function (Page 1-2, paragraph 0013-0014, 0018 – using movement, color value, lighting, objects, clip cutting or scene switches; Page 2, paragraph 0017, 0022 – scoring mode). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the combination to include the missing limitations as taught by Li in order to enable a composition selection time for video creation and music copyright purchase and authorization time to be reduced (Page 1, paragraph 0003) as disclosed by Li.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARZANA HOSSAIN whose telephone number is (571)272-5943. The examiner can normally be reached 9:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Kelley can be reached at 571-272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FARZANA HOSSAIN/Primary Examiner, Art Unit 2482
December 9, 2025