DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/22/25 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-12 and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Chang et al. (US 2020/0193163 A1) in view of Wei et al. (US 2022/0021950).
Regarding claim 1, Chang discloses an information processing apparatus (see 200 in fig. 2), comprising: a control unit (e.g., see “DataFx” in ¶ [0105]) configured to: determine a first analysis engine (104-114 in fig. 1) from a plurality of analysis engines (see the various scenes associated with 102-114 in fig. 1), wherein the first analysis engine is configured to detect, based on scene detection information (see 202 in fig. 2), a scene in an input video (see the various scenes in 104-114 of fig. 1; e.g., see “events” in ¶ [0105]).
Although Chang discloses determining, based on scene-related information (e.g., see the scene-related information in 104-114 of fig. 1) associated with the detected scene, a second analysis engine (e.g., see “DataFx” in ¶ [0105]) from the plurality of analysis engines, wherein the second analysis engine is configured to generate first result information associated with the detected scene (e.g., see the “metrics” of “DataFx” in ¶ [0105]), Chang does not expressly disclose wherein the generated first result information includes: scene start information associated with a start of the detected scene, and scene end information associated with an end of the detected scene, wherein the start of the detected scene is different from the end of the detected scene.
However, Wei discloses a scene detection system wherein the generated first result information includes: scene start information associated with a start of the detected scene, and scene end information associated with an end of the detected scene, wherein the start of the detected scene is different from the end of the detected scene (see 301 in fig. 5).
Given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Wei's foundational teachings of generating scene time information into Chang's scene detection for the benefit of providing information essential for retrieving a scene from a video, such as for video on demand.
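Examiner's note: for clarity of the record only, the following is a minimal, hypothetical sketch of the engine-selection flow recited in claim 1 as taught by the combination; all names, values, and structures are illustrative assumptions and are not drawn from Chang or Wei.

```python
# Hypothetical sketch only; names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SceneResult:
    scene_type: str
    start_s: float  # scene start information
    end_s: float    # scene end information, distinct from the start

def detect_scene(frames, scene_detection_info):
    """First analysis engine: detects a scene per the detection settings."""
    # Placeholder logic: report a scene spanning a fixed frame window.
    return {"scene_type": scene_detection_info["type"],
            "first_frame": 120, "last_frame": 360}

def generate_result(scene, fps=30.0):
    """Second analysis engine: emits start/end information for the scene."""
    return SceneResult(scene_type=scene["scene_type"],
                       start_s=scene["first_frame"] / fps,
                       end_s=scene["last_frame"] / fps)

# Control unit: determines each engine from a plurality and runs them in turn.
ENGINES = {"detect": detect_scene, "result": generate_result}
scene = ENGINES["detect"]([], {"type": "goal"})
result = ENGINES["result"](scene)
assert result.start_s != result.end_s  # start differs from end, per claim 1
```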
Regarding claim 2, the references further disclose wherein the first result information includes time information associated with the detected scene (see Wei 301 in fig. 5).
Regarding claim 3, the references further disclose wherein the second analysis engine is further configured to: identify the scene start information; and identify the scene end information (see Wei 301 in fig. 5).
Regarding claim 4, Chang further discloses wherein the control unit is further configured to determine a third analysis engine (e.g., see 104 from the plurality 102-118 in fig. 1) from the plurality of analysis engines, the scene-related information includes scene-type information (see INTERACTION in 104 of fig. 1), and the third analysis engine is configured to generate the first result information based on the scene-type information (e.g., see SHOT MATRIX in 104 of fig. 1).
Regarding claim 5, Chang further discloses wherein the control unit is further configured to determine a fourth analysis engine from the plurality of analysis engines (see any of 104-118 in fig. 1; see multiple UI samples in figs. 4-14 and 25-31), and the fourth analysis engine is configured to: identify the scene start information based on the type of the detected scene (see 104-118 in fig. 1; see multiple UI time period samples in figs. 4-14 and 25-31); and identify the scene end information based on the type of the detected scene (see 104-118 in fig. 1; see multiple UI samples in figs. 4-14 and 25-31).
Regarding claim 6, the references further disclose wherein the generated first result information further includes time information associated with the scene (see Chang 104-118 in fig. 1; see multiple UI samples in figs. 4-14 and 25-31), the control unit is further configured to determine a fifth analysis engine from the plurality of analysis engines, the fifth analysis engine is configured to analyze a section to generate second result information (see Chang 104-118 in fig. 1; see multiple UI samples in figs. 4-14 and 25-31), and the section is a section of the input video associated with the time information (see Wei 301 in fig. 5).
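Examiner's note: a hypothetical sketch of the section analysis recited in claim 6; the helper names and frame rate are illustrative assumptions, not the implementation of either reference.

```python
# Hypothetical sketch only; helper names and the frame rate are assumptions.
def analyze_section(frames, start_s, end_s, fps=30.0):
    """Fifth analysis engine: analyzes only the section of the input video
    delimited by the time information in the first result information."""
    first, last = int(start_s * fps), int(end_s * fps)
    section = frames[first:last + 1]
    # Second result information: here, simply the section's frame count.
    return {"section_frames": len(section)}

frames = list(range(600))  # stand-in for decoded video frames
second_result = analyze_section(frames, start_s=4.0, end_s=12.0)
```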
Regarding claim 7, Chang further discloses wherein the determination of the fifth analysis engine is based on scene-type information associated with the detected scene (see 104-118 in fig. 1; see multiple UI samples in figs. 4-14 and 25-31).
Regarding claim 8, Chang further discloses wherein the control unit is further configured to set the scene-related information based on a setting of the scene detection information (e.g., see ¶ [0225]).
Regarding claim 9, Chang further discloses wherein the control unit is further configured to set the scene detection information based on input of a scene type (see 104-118 in fig. 1; see multiple UI samples in figs. 4-14 and 25-31).
Regarding claim 10, Chang further discloses wherein the control unit is further configured to set the scene detection information based on input of a sport type (see 104-118 in fig. 1; see multiple UI samples in figs. 4-14 and 25-31).
Regarding claim 11, Chang further discloses wherein the control unit is further configured to: generate metadata based on the detected scene and the generated first result information (see 104-118 in fig. 1; see multiple UI samples in figs. 4-14 and 25-31); and link the generated metadata to the input video (see 104-118 in fig. 1; see multiple UI samples in figs. 4-14 and 25-31).
Regarding claim 12, the references further disclose wherein the control unit is further configured to: generate metadata based on the detected scene, the generated first result information, and the generated second result information; and link the generated metadata to the input video (see Chang 104-118 in fig. 1; see multiple UI samples in figs. 4-14 and 25-31, and see Wei 301 in fig. 5).
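Examiner's note: a hypothetical sketch of the metadata generation and linking recited in claims 11-12; the sidecar-file linkage is an illustrative assumption, not a mapping of either reference.

```python
# Hypothetical sketch only; the sidecar-file linkage is an assumption.
import json

def build_metadata(scene_type, first_result, second_result=None):
    """Generates metadata from the detected scene, the first result
    information, and (per claim 12) the second result information."""
    meta = {"scene_type": scene_type, **first_result}
    if second_result:
        meta.update(second_result)
    return meta

def link_metadata(video_path, meta):
    """Links the generated metadata to the input video via a sidecar file."""
    with open(video_path + ".meta.json", "w", encoding="utf-8") as f:
        json.dump(meta, f, indent=2)

meta = build_metadata("goal", {"start_s": 4.0, "end_s": 12.0},
                      {"section_frames": 241})
link_metadata("match.mp4", meta)
```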
Regarding claim 15, Chang further discloses wherein the control unit is configured to: generate image information corresponding to the detected scene, and combine the generated image information with the input video (see 104-118 in fig. 1; see multiple UI samples in figs. 4-14 and 25-31).
Regarding claim 16, the references further disclose wherein the generated first result information further includes time information associated with the detected scene (see Chang 104-118 in fig. 1; see multiple UI samples in figs. 4-14 and 25-31), and the control unit is further configured to superimpose the generated image information on the input video based on the time information associated with the detected scene (see Chang 104-118 in fig. 1; see multiple UI samples in figs. 4-14 and 25-31, and see Wei 301 in fig. 5).
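Examiner's note: a hypothetical sketch of the superimposition recited in claims 15-16; the frame rate and overlay representation are illustrative assumptions.

```python
# Hypothetical sketch only; the frame rate and overlay form are assumptions.
def superimpose(frames, overlay, start_s, end_s, fps=30.0):
    """Combines generated image information with the input video, applying
    it only to frames whose timestamps fall within the detected scene."""
    out = []
    for i, frame in enumerate(frames):
        t = i / fps  # timestamp of this frame
        out.append((frame, overlay) if start_s <= t <= end_s else (frame, None))
    return out

composited = superimpose(list(range(600)), "score banner",
                         start_s=4.0, end_s=12.0)
```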
Regarding claims 17 and 18, these claims recite limitations analogous to those of claim 1 and are therefore rejected on the same basis.
Claims 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Chang and Wei in view of Smith et al. (US 2020/0020356).
Regarding claim 13, the references do not disclose wherein the control unit is further configured to: compare first time information with second time information, wherein the first time information is associated with the detected scene, and the second time information is from external data; and overwrite, based on the comparison, the second time information with the first time information.
However, Smith discloses an information processing apparatus wherein the control unit is further configured to: compare first time information with second time information, wherein the first time information is associated with the detected scene, and the second time information is from external data (e.g., see ¶ [0120]); and overwrite, based on the comparison, the second time information with the first time information (e.g., see ¶ [0120]).
Given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Smith's teachings of time synchronization into the Wei scene display for the benefit of increasing accuracy in the display, visualization, and interaction with outputs from such methods and systems.
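Examiner's note: a hypothetical sketch of the compare-and-overwrite step recited in claim 13; the tolerance and record layout are illustrative assumptions and are not Smith's disclosed implementation.

```python
# Hypothetical sketch only; the tolerance and record layout are assumptions,
# not Smith's disclosed implementation.
def reconcile_times(first_start_s, external_record, tolerance_s=0.5):
    """Compares detected-scene time information (first) with externally
    supplied time information (second) and overwrites the external value
    when the two disagree beyond a tolerance."""
    if abs(external_record["start_s"] - first_start_s) > tolerance_s:
        external_record["start_s"] = first_start_s  # overwrite external data
    return external_record

external = {"start_s": 6.2}  # e.g., a timestamp from an external data feed
reconciled = reconcile_times(first_start_s=4.0, external_record=external)
```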
Regarding claim 14, the references do not disclose wherein the control unit is further configured to: obtain first accompanying information as the second result; compare the obtained first accompanying information with second accompanying information, wherein the second accompanying information is from an external source; and overwrite, based on the comparison, the second accompanying information with the first accompanying information.
However, Smith discloses an information processing apparatus wherein the control unit is further configured to: obtain first accompanying information as the second result (e.g., see ¶ [0120]); compare the obtained first accompanying information with second accompanying information (e.g., see ¶ [0120]), wherein the second accompanying information is from an external source (e.g., see ¶ [0120]); and overwrite, based on the comparison, the second accompanying information with the first accompanying information (e.g., see ¶ [0120]).
Given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Smith's teachings of time synchronization into the Wei scene display for the benefit of increasing accuracy in the display, visualization, and interaction with outputs from such methods and systems.
Response to Arguments
Applicant's arguments with respect to the amended claims have been considered but are moot in view of the new ground(s) of rejection.
Citation of Pertinent Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Peng (US 2021/0289186) discloses scene processing for real-time display.
Teppler (US 2005/0160272) discloses providing trusted-time content.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD T TORRENTE whose telephone number is (571) 270-3702. The examiner can normally be reached M-F, 6:45 am-3:15 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at (571) 272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RICHARD T TORRENTE/Primary Examiner, Art Unit 2485