DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 5, 7, 10-16, 18, and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Master et al. (US Pub. No. 2016/0212501).
Consider claim 1. Master et al. discloses a data processing method, comprising: determining a trigger point sequence according to a first multimedia file, wherein the trigger point sequence comprises at least one control trigger point (claim 1 describes generating, based on program information about essence of a media program, an essence-and-version identifier for the essence of the media program; based on the essence of the media program, a reference clock and the essence-and-version identifier, generating time-based metadata comprising a grid of time points over the essence of the media program); determining a control instruction corresponding to each of the control trigger points (fig. 3D shows control info for companion content), wherein the control instruction is used for cooperating with the first multimedia file to control a controlled device (fig. 3D shows a soundbar); and generating a companion program (fig. 3D shows creator, grid info, and companion content) according to the trigger point sequence and the control instruction (claim 17 describes a user control to a user with the media device at the specific time point of the essence of the media program) corresponding to each of the control trigger points (claim 1 describes correlating companion content with the grid of time points over the essence of the media program).
Consider claim 5. Master et al. discloses the data processing method according to claim 1, wherein after generating a companion program according to the trigger point sequence and the control instruction corresponding to each of the control trigger points, the data processing method further comprises: in response to a modification instruction of a parameter interface, adjusting, according to the modification instruction, time sequence of each of the control trigger points and/or the control instruction corresponding to each of the control trigger points in the companion program (para. 0045 describes fetching companion content based on time-based metadata).
Consider claim 7. Master et al. discloses the data processing method according to claim 1, wherein after generating a companion program according to the trigger point sequence and the control instruction corresponding to each of the control trigger points, the data processing method further comprises: performing clock synchronization between a multimedia playback terminal playing a multimedia file and the controlled device; and adjusting time sequence of the companion program according to a result of the clock synchronization (para. 0044 describes a reference clock, or a reference timeline representing a reference clock, as described herein can be generated in a wide variety of ways. In some embodiments, the reference clock, or the reference timeline, can be generated by the essence encoder and grid generator. In some other embodiments, a reference clock, or a reference timeline representing the reference clock, can be sourced from a clock source other than the essence encoder and grid generator).
Consider claim 12. Master et al. discloses an electronic device, comprising: one or more processors; a memory, storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the data processing method according to claim 1; and one or more input and output (I/O) interfaces, connected between the one or more processors and the memory, and configured to implement information exchange between the one or more processors and the memory (paras. 0181-0186 describe a plurality of processors; a memory storing programs which, when executed by the one or more processors, cause the one or more processors to implement the data processing method; and input and output (I/O) interfaces connected between the one or more processors and the memory and configured to implement information exchange between the one or more processors and the memory).
Consider claim 14. Master et al. discloses the data processing method according to claim 1, wherein the first multimedia file comprises at least one of: an audio file, a video file, an augmented reality (AR) file or a virtual reality (VR) file (para. 0042 describes companion content including audio and video).
Consider claim 15. Master et al. discloses the data processing method according to claim 1, wherein a controlled device refers to a peripheral device (para. 0185 describes user input devices).
Consider claim 16. Master et al. discloses the data processing method according to claim 1, wherein the controlled device refers to a device that can cooperate with a playback of the first multimedia file to provide a user with sensory stimuli or interactive responses (paras. 0031-0033 describe providing interactive experiences to support delivery solutions).
Consider claim 18. Master et al. discloses the data processing method according to claim 1, wherein the companion program is used as an add-on program for the first multimedia file, and is used for controlling the controlled device in cooperation with a playback of the first multimedia file (para. 0036 describes the companion program is used as an add-on program for the first multimedia file, and is used for controlling the controlled device in cooperation with a playback of the first multimedia file).
Consider claim 19. Master et al. discloses the data processing method according to claim 1, wherein determining a trigger point sequence according to a first multimedia file comprises: determining the trigger point sequence on the basis of time line of the first multimedia file in accordance with the content of the first multimedia file (para. 0044 describes a reference clock, or a reference timeline representing a reference clock, as described herein can be generated in a wide variety of ways. In some embodiments, the reference clock, or the reference timeline, can be generated by the essence encoder and grid generator. In some other embodiments, a reference clock, or a reference timeline representing the reference clock, can be sourced from a clock source other than the essence encoder and grid generator).
Claims 10, 11, and 13 are rejected using reasoning similar to that applied to corresponding claim 12 above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Master et al. (US Pub. No. 2016/0212501) in view of Hisense (CN 113542891).
Consider claim 8. Master et al. discloses all claimed limitations as stated above, except wherein determining a trigger point sequence according to a first multimedia file comprises: determining the at least one control trigger point according to a value of a sensor of a multimedia playback terminal playing a multimedia file.
However, Hisense teaches wherein determining a trigger point sequence according to a first multimedia file comprises: determining the at least one control trigger point according to a value of a sensor of a multimedia playback terminal playing a multimedia file (para. 0137 describes a first terminal that plays a target video selected by a wearer, detects whether a second terminal is connected, and then sends a control instruction to the second terminal to enable the second terminal to acquire and play the special effect information corresponding to the preset trigger point).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Master et al. such that determining a trigger point sequence according to a first multimedia file comprises determining the at least one control trigger point according to a value of a sensor of a multimedia playback terminal playing a multimedia file, in order to improve the performance of AR display special effects as suggested by the prior art.
Consider claim 9. Master et al. discloses all claimed limitations as stated above, except wherein determining the at least one control trigger point according to a value of a sensor of a multimedia playback terminal playing a multimedia file comprises: for any sensor, configuring one control trigger point corresponding to each threshold of the sensor's value, wherein the sensor is provided with at least one threshold.
However, Hisense teaches wherein determining the at least one control trigger point according to a value of a sensor of a multimedia playback terminal playing a multimedia file comprises: for any sensor, configuring one control trigger point corresponding to each threshold of the sensor's value, wherein the sensor is provided with at least one threshold (para. 0137 describes a first terminal that plays a target video selected by a wearer, detects whether a second terminal is connected, and then sends a control instruction to the second terminal to enable the second terminal to acquire and play the special effect information corresponding to the preset trigger point).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Master et al. such that determining the at least one control trigger point according to a value of a sensor of a multimedia playback terminal playing a multimedia file comprises, for any sensor, configuring one control trigger point corresponding to each threshold of the sensor's value, wherein the sensor is provided with at least one threshold, in order to improve the performance of AR display special effects as suggested by the prior art.
Claims 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Master et al. (US Pub. No. 2016/0212501) in view of Yang et al. (US Pub. No. 2020/0380263).
Consider claim 17. Master et al. discloses all claimed limitations as stated above, except wherein the feature point comprises a key frame or a key point of the first multimedia file.
However, Yang et al. teaches wherein the feature point comprises a key frame or a key point of the first multimedia file (the abstract describes a feature extractor to extract feature descriptors to determine key frames).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Master et al. such that the feature point comprises a key frame or a key point of the first multimedia file, in order to detect key image frames in a video as suggested by the prior art.
Consider claim 20. Master et al. discloses all claimed limitations as stated above, except performing an analysis on the first multimedia file using multimedia analysis software or artificial intelligence to extract key frames or key points.
However, Yang et al. teaches performing an analysis on the first multimedia file using multimedia analysis software or artificial intelligence to extract key frames or key points (the abstract describes performing an analysis on the first multimedia file using artificial intelligence to extract key frames).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to perform an analysis on the first multimedia file using multimedia analysis software or artificial intelligence to extract key frames or key points, in order to detect key image frames in a video as suggested by the prior art.
Allowable Subject Matter
Claims 2-4 and 6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mishawn N Hunter whose telephone number is (571)272-7635. The examiner can normally be reached Monday-Friday 7am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thai Tran, can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MISHAWN N. HUNTER/Primary Examiner, Art Unit 2484