Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 1/16/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-10 and 12-16 are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by McLean et al. (US 9,361,521).
Regarding claims 1 and 16, McLean teaches an information processing apparatus/system including an electronic device (Figs. 2A-3 teach different configurations including a user device with a display for displaying highlights) and an information processing apparatus (Figs. 2A-3 teach various arrangements of where the processing takes place, either within the device in a processing area or in separate external devices and/or servers that perform the processing) capable of communicating with an electronic device, the information processing apparatus comprising:
one or more memories (Fig. 3, paragraphs 131 and 132 teach processors and memories for storing instructions executed by the processors); and
one or more processors in communication with the one or more memories (Fig. 3, paragraphs 131 and 132 teach processors and memories for storing instructions executed by the processors), wherein the one or more processors and the one or more memories are configured to:
determine a video highlight scene based on a feature of video data included in a moving image file (Paragraph 188 and Fig. 4D teach loading video data into the system and determining video events. Paragraph 177 teaches wherein the recorded event is processed to determine segments, which includes video analysis as well as the audio feed), and
determine an audio highlight scene based on a feature of audio data included in the moving image file (Paragraph 188 and Fig. 4D teach loading video data into the system and determining audio events. Paragraph 177 teaches wherein the recorded event is processed to determine segments, which includes video analysis as well as the audio feed);
obtain information about a time period of each of the video highlight scene and the audio highlight scene (paragraph 177 teaches start/end times for each of the selected segments);
receive priority information from the electronic device, the priority information indicating whether either the video highlight scene or the audio highlight scene or both are to be prioritized (paragraph 345 teaches selecting the most exciting plays for inclusion in the highlight show. Additionally, paragraph 177 teaches wherein segments are determined based on an excitement level, and paragraphs 54-58 teach that the excitement level is used “for a representation of an occurrence” along with its importance. Therefore, the prior art uses the means for selecting which “segments” created by video- or audio-based analysis are selected, thereby intrinsically prioritizing a video segment if the video segment results in a higher ranking, or an audio segment if the audio segment has a higher ranking than other segments. Additionally, paragraphs 54-58 teach combining several related moments using an offset and/or a “string” of occurrences/events based on audio/narration, thereby at least showing an instance where both audio and video segments are prioritized);
determine a time period of a highlight moving image based on the priority information and the information about the time period of each of the video highlight scene and the audio highlight scene (paragraphs 182 and 345 teach wherein the highlight show is created and presented based on the video or audio event (as discussed in paragraphs 54-58; paragraph 177 teaches durations being decided for inclusion in the final “highlight show/video”) and based on selecting the most exciting plays for inclusion); and
provide a notification of the time period of the highlight moving image to the electronic device (Figs. 6A-6E show where the time periods associated with the decided-upon events/segments are communicated via the user interface (“electronic device”) to the user).
Regarding claim 2, McLean teaches the claimed wherein, in a case where the time period of the video highlight scene and the time period of the audio highlight scene do not overlap and the received priority information indicates that both the video highlight scene and the audio highlight scene are to be prioritized, the one or more processors and the one or more memories determine the time period of each of the video highlight scene and the audio highlight scene to be the time period of the highlight moving image (paragraph 345 teaches selecting the most exciting plays for inclusion in the highlight show. Additionally, paragraph 177 teaches wherein segments are determined based on an excitement level, and paragraphs 54-58 teach that the excitement level is used “for a representation of an occurrence” along with its importance. Therefore, the prior art uses the means for selecting which “segments” created by video- or audio-based analysis are selected, thereby intrinsically prioritizing a video segment if the video segment results in a higher ranking, or an audio segment if the audio segment has a higher ranking than other segments. Additionally, paragraphs 54-58 teach combining several related moments using an offset and/or a “string” of occurrences/events based on audio/narration, thereby at least showing an instance where both audio and video segments are prioritized. Furthermore, both video segments and audio segments are prioritized when the highlight show includes video- and audio-based segments that do not overlap in time).
Regarding claim 3, McLean teaches the claimed wherein, in a case where the time period of the video highlight scene and the time period of the audio highlight scene do not overlap and the received priority information indicates that either the video highlight scene or the audio highlight scene is to be prioritized, the one or more processors and the one or more memories determine, to be the time period of a highlight moving image, either one of the time periods of the video highlight scene and the audio highlight scene, whichever is to be prioritized (paragraph 345 teaches selecting the most exciting plays for inclusion in the highlight show. Additionally, paragraph 177 teaches wherein segments are determined based on an excitement level, and paragraphs 54-58 teach that the excitement level is used “for a representation of an occurrence” along with its importance. Therefore, the prior art uses the means for selecting which “segments” created by video- or audio-based analysis are selected, thereby intrinsically prioritizing a video segment if the video segment results in a higher ranking, or an audio segment if the audio segment has a higher ranking than other segments. Additionally, paragraphs 54-58 teach combining several related moments using an offset and/or a “string” of occurrences/events based on audio/narration, thereby at least showing an instance where both audio and video segments are prioritized. Furthermore, both video segments and audio segments are prioritized when the highlight show includes video- and audio-based segments that do not overlap in time).
Regarding claim 4, McLean teaches the claimed wherein, in a case where the time period of the video highlight scene and the time period of the audio highlight scene overlap, the received priority information indicates that both the video highlight scene and the audio highlight scene are to be prioritized, and the time period of the video highlight scene includes the time period of the audio highlight scene, the one or more processors and the one or more memories determine the time period of the video highlight scene to be the time period of the highlight moving image (paragraph 345 teaches selecting the most exciting plays for inclusion in the highlight show. Additionally, paragraph 177 teaches wherein segments are determined based on an excitement level, and paragraphs 54-58 teach that the excitement level is used “for a representation of an occurrence” along with its importance. Therefore, the prior art uses the means for selecting which “segments” created by video- or audio-based analysis are selected, thereby intrinsically prioritizing a video segment if the video segment results in a higher ranking, or an audio segment if the audio segment has a higher ranking than other segments. Additionally, paragraphs 54-58 teach combining several related moments using an offset and/or a “string” of occurrences/events based on audio/narration, thereby at least showing an instance where both audio and video segments are prioritized. Furthermore, both video segments and audio segments are prioritized when the highlight show includes a segment that is generated based on the discussion in paragraphs 54-58, wherein the audio and video segments overlap due to a related segment and/or a “string” of occurrences/events).
Regarding claim 5, McLean teaches the claimed wherein in a case where the time period of the video highlight scene and the time period of the audio highlight scene overlap, the received priority information indicates that both the video highlight scene and the audio highlight scene are to be prioritized, and the time period of the video highlight scene includes only a start time of the audio highlight scene, the one or more processors and the one or more memories determine a time period from a start time of the video highlight scene to an end time of the audio highlight scene to be the time period of the highlight moving image (paragraph 345 teaches selecting the most exciting plays for inclusion in the highlight show. Additionally, paragraph 177 teaches wherein segments are determined based on an excitement level, and paragraphs 54-58 teach that the excitement level is used “for a representation of an occurrence” along with its importance. Therefore, the prior art uses the means for selecting which “segments” created by video- or audio-based analysis are selected, thereby intrinsically prioritizing a video segment if the video segment results in a higher ranking, or an audio segment if the audio segment has a higher ranking than other segments. Additionally, paragraphs 54-58 teach combining several related moments and/or a “string” of occurrences/events based on audio/narration, thereby at least showing an instance where both audio and video segments are prioritized. Furthermore, both video segments and audio segments are prioritized when the highlight show includes a segment that is generated based on the discussion in paragraphs 54-58, wherein the audio and video segments overlap due to a related segment using an offset and/or a “string” of occurrences/events. Paragraph 177 also teaches wherein “end of a sentence” or “when the play began” are used as markers for determining the start time of the highlight segment).
Regarding claim 6, McLean teaches the claimed wherein in a case where the time period of the video highlight scene and the time period of the audio highlight scene overlap, the received priority information indicates that both the video highlight scene and the audio highlight scene are to be prioritized, and the time period of the video highlight scene includes only an end time of the audio highlight scene, the one or more processors and the one or more memories determine a time period from a start time of the audio highlight scene to an end time of the video highlight scene to be the time period of the highlight moving image (paragraph 345 teaches selecting the most exciting plays for inclusion in the highlight show. Additionally, paragraph 177 teaches wherein segments are determined based on an excitement level, and paragraphs 54-58 teach that the excitement level is used “for a representation of an occurrence” along with its importance. Therefore, the prior art uses the means for selecting which “segments” created by video- or audio-based analysis are selected, thereby intrinsically prioritizing a video segment if the video segment results in a higher ranking, or an audio segment if the audio segment has a higher ranking than other segments. Additionally, paragraphs 54-58 teach combining several related moments and/or a “string” of occurrences/events based on audio/narration, thereby at least showing an instance where both audio and video segments are prioritized. Furthermore, both video segments and audio segments are prioritized when the highlight show includes a segment that is generated based on the discussion in paragraphs 54-58, wherein the audio and video segments overlap due to a related segment using an offset and/or a “string” of occurrences/events. Paragraph 177 also teaches wherein “end of a sentence” or “when the play began” are used as markers for determining the start time of the highlight segment.
Therefore, the start time could be related to a video event or an audio event (such as one related to an excitement level, as recited in paragraph 177). The start and end times of the segment, once the segment is selected to be part of the highlight show, are thereafter used as the claimed start and end times).
Regarding claim 7, McLean teaches the claimed wherein in a case where the time period of the video highlight scene and the time period of the audio highlight scene overlap, the received priority information indicates that the video highlight scene is to be prioritized, and the time period of the video highlight scene includes the time period of the audio highlight scene, the one or more processors and the one or more memories determine the time period of the video highlight scene to be the time period of the highlight moving image (paragraph 345 teaches selecting the most exciting plays for inclusion in the highlight show. Additionally, paragraph 177 teaches wherein segments are determined based on an excitement level, and paragraphs 54-58 teach that the excitement level is used “for a representation of an occurrence” along with its importance. Therefore, the prior art uses the means for selecting which “segments” created by video- or audio-based analysis are selected, thereby intrinsically prioritizing a video segment if the video segment results in a higher ranking, or an audio segment if the audio segment has a higher ranking than other segments. Additionally, paragraphs 54-58 teach combining several related moments and/or a “string” of occurrences/events based on audio/narration, thereby at least showing an instance where both audio and video segments are prioritized. Furthermore, both video segments and audio segments are prioritized when the highlight show includes a segment that is generated based on the discussion in paragraphs 54-58, wherein the audio and video segments overlap due to a related segment and/or a “string” of occurrences/events. Paragraph 177 also teaches wherein “end of a sentence” or “when the play began” are used as markers for determining the start time of the highlight segment.
The start and end times of the segment, once the segment is selected to be part of the highlight show, are thereafter used as the claimed start and end times).
Regarding claim 8, McLean teaches the claimed wherein the one or more processors and the one or more memories are further configured to detect a conversation from the audio highlight scene (paragraph 177 teaches the audio feed being detected), and
wherein, in a case where the time period of the video highlight scene and the time period of the audio highlight scene overlap, the received priority information indicates that the video highlight scene is to be prioritized, and the time period of the video highlight scene includes only a start time of the audio highlight scene, the one or more processors and the one or more memories determine, to be the time period of the highlight moving image, a time period from a start time of the video highlight scene to time when a conversation detected at an end time of the video highlight scene ends in the audio highlight scene (paragraph 345 teaches selecting the most exciting plays for inclusion in the highlight show. Additionally, paragraph 177 teaches wherein segments are determined based on an excitement level, and paragraphs 54-58 teach that the excitement level is used “for a representation of an occurrence” along with its importance. Therefore, the prior art uses the means for selecting which “segments” created by video- or audio-based analysis are selected, thereby intrinsically prioritizing a video segment if the video segment results in a higher ranking, or an audio segment if the audio segment has a higher ranking than other segments. Additionally, paragraphs 54-58 teach combining several related moments and/or a “string” of occurrences/events based on audio/narration, thereby at least showing an instance where both audio and video segments are prioritized. Furthermore, both video segments and audio segments are prioritized when the highlight show includes a segment that is generated based on the discussion in paragraphs 54-58, wherein the audio and video segments overlap due to a related segment and/or a “string” of occurrences/events. Paragraph 177 also teaches wherein “end of a sentence” or “when the play began” are used as markers for determining the start time of the highlight segment.
The start and end times of the segment, once the segment is selected to be part of the highlight show, are thereafter used as the claimed start and end times).
Regarding claim 9, McLean teaches the claimed wherein the one or more processors and the one or more memories are further configured to detect a conversation from the audio highlight scene, and
wherein, in a case where the time period of the video highlight scene and the time period of the audio highlight scene overlap, the received priority information indicates that the video highlight scene is to be prioritized, and the time period of the video highlight scene includes only an end time of the audio highlight scene, the one or more processors and the one or more memories determine, to be the time period of the highlight moving image, a time period from time when a conversation detected at a start time of the video highlight scene starts in the audio highlight scene to an end time of the video highlight scene (paragraph 345 teaches selecting the most exciting plays for inclusion in the highlight show. Additionally, paragraph 177 teaches wherein segments are determined based on an excitement level, and paragraphs 54-58 teach that the excitement level is used “for a representation of an occurrence” along with its importance. Therefore, the prior art uses the means for selecting which “segments” created by video- or audio-based analysis are selected, thereby intrinsically prioritizing a video segment if the video segment results in a higher ranking, or an audio segment if the audio segment has a higher ranking than other segments. Additionally, paragraphs 54-58 teach combining several related moments and/or a “string” of occurrences/events based on audio/narration, thereby at least showing an instance where both audio and video segments are prioritized. Furthermore, both video segments and audio segments are prioritized when the highlight show includes a segment that is generated based on the discussion in paragraphs 54-58, wherein the audio and video segments overlap due to a related segment and/or a “string” of occurrences/events. The start time could be related to a video event or an audio event (such as one related to an excitement level, as recited in paragraph 177).
Paragraph 177 also teaches wherein “end of a sentence” or “when the play began” are used as markers for determining the start time of the highlight segment. The start and end times of the segment, once the segment is selected to be part of the highlight show, are thereafter used as the claimed start and end times).
Regarding claim 10, McLean teaches the claimed wherein, in a case where the time period of the video highlight scene and the time period of the audio highlight scene overlap, and the received priority information indicates that the audio highlight scene is to be prioritized, the one or more processors and the one or more memories determine the time period of the audio highlight scene to be the time period of the highlight moving image (paragraph 345 teaches selecting the most exciting plays for inclusion in the highlight show. Additionally, paragraph 177 teaches wherein segments are determined based on an excitement level, and paragraphs 54-58 teach that the excitement level is used “for a representation of an occurrence” along with its importance. Therefore, the prior art uses the means for selecting which “segments” created by video- or audio-based analysis are selected, thereby intrinsically prioritizing a video segment if the video segment results in a higher ranking, or an audio segment if the audio segment has a higher ranking than other segments. Additionally, paragraphs 54-58 teach combining several related moments using an offset and/or a “string” of occurrences/events based on audio/narration, thereby at least showing an instance where both audio and video segments are prioritized).
Regarding claim 12, McLean teaches the claimed wherein the one or more processors and the one or more memories are further configured to receive an instruction on whether to generate a highlight moving image from the electronic device (Fig. 4A, step 400 starts the process for generating the highlight show), and
wherein, in a case where the received instruction indicates that the highlight moving image is to be generated, the one or more processors and the one or more memories extract a portion corresponding to the time period of the highlight moving image from each of the video data and the audio data and generate a highlight moving image file (Fig. 4A ends with a highlight show generated and shown to the user on the display in step 418).
Regarding claim 13, McLean teaches the claimed wherein the one or more processors and the one or more memories are further configured to notify the electronic device of the time period of each of the video highlight scene and the audio highlight scene (Figs. 6A-6E show where the time periods associated with the decided-upon events/segments are communicated via the user interface (“electronic device”) to the user).
Method claim 14 and NTCRM claim 15 are rejected for the same reasons as above, with respect to the system performing the same steps and already including memories that store programming instructions to be executed by a processor to perform the steps.
Allowable Subject Matter
Claim 11 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 11, while McLean teaches the claimed invention as discussed in claim 1 above, McLean fails to specifically teach determining when a notification is to be provided based solely on whether the time period of the video highlight scene includes the time period of the audio highlight scene, and therefore fails to teach “wherein in a case where the time period of the video highlight scene and the time period of the audio highlight scene overlap, the received priority information indicates that the video highlight scene is to be prioritized, and the time period of the video highlight scene includes the time period of the audio highlight scene, the one or more processors and the one or more memories do not provide the notification, and wherein, in a case where the time period of the video highlight scene and the time period of the audio highlight scene overlap, the received priority information indicates that the video highlight scene is to be prioritized, and the time period of the video highlight scene does not include the time period of the audio highlight scene, the one or more processors and the one or more memories provide the notification.”
Furthermore, the claim limitations quoted above appear to fall outside of the abstract idea groupings (per the 2019 Revised Patent Subject Matter Eligibility Guidance (PEG)), which include mathematical concepts, mental processes, and certain methods of organizing human activity. The claimed limitations are stated in such a manner that the processes are not broad enough (for each of the claims as a whole) to fall into one of the three groupings of abstract ideas.
As indicated by the above statements, the closest prior art discussed above, either singularly or in combination, fails to anticipate the above combination of the discussed features/limitations or render it obvious; additionally, applicant’s arguments have been considered persuasive in light of the claim limitations as well as the enabling portions of the specification.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Cho et al. (US 2016/0104045) teaches a method for providing combined summaries based on separate audio and video segments on a display.
Han et al. (US 2016/01247328) teaches a system and method for detecting sports video highlights based on voice recognition.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GELEK W TOPGYAL whose telephone number is (571)272-8891. The examiner can normally be reached M-F (9:30-6 PST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GELEK W TOPGYAL/Primary Examiner, Art Unit 2481