Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claims 1-2, 4-11, and 14-20 have been considered but are moot in view of the new grounds of rejection.
Examiner has introduced Schleicher (10335690) to disclose the newly amended features.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4, 6-11, 14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (2020/0143447) in view of Bernstein (10904632) and Schleicher (10335690).
As for claim 1, Wang discloses an information processing method of a terminal configured to display a video distributed from a server, the method comprising:
receiving, by at least one processor of the terminal, the video distributed from the server (User device receives a live show to watch by the viewer; [0021], [0026]);
displaying on a display section of the terminal the video that is distributed ([0041]);
acquiring, by the at least one processor of the terminal, first information (virtual gift) based on a first input performed by a user of the terminal on the display section displaying the video (While the viewer is watching a live video, the viewer sends the performer a virtual gift to show love and support to the performer; [0021], [0026], [0039]);
However, Wang fails to disclose:
acquiring, by the at least one processor of the terminal, second information related to the video based on the first information;
displaying the second information on the display section; and
playing, by the at least one processor, on the display section a first part corresponding to the first input in the video, based on a second input for the second information performed by the user of the terminal;
the second information including information representing a first play position corresponding to the first input of the user of the terminal;
creating a digest video corresponding to the first input of the user of the terminal;
wherein the creating the digest video comprises:
setting a start point of extraction of a section which is a second part of the video, based on the second information, and
setting an end point of extraction of the section at a time point later than the start point of extraction based on at least one of an amount of comments of the user to a distributor of the video distributed from the server, a voice of the distributor, or a facial expression of the distributor.
In an analogous art, Bernstein discloses:
acquiring, by the at least one processor, second information (engagement indication at specific time) related to the video based on the first information (Referring to Fig. 3B, viewers send engagement representations which represent signals of appreciations from the viewers. The server provides an engagement indication with the real-time video stream so that other viewers can see the engagement representations; col. 18, line 26-col 19, line 5; Associating the indication with a time enables the displaying device to determine how long to display a representation of the engagement and, during replay, when to begin displaying the representation of the engagement; col. 25, lines 25-50);
displaying the second information (350 & 355 – fig. 3B) on the display section (col. 18, line 26-col 19, line 5); and
playing, by the at least one processor, on the display section a first part corresponding to the first input in the video, based on a second input for the second information performed by the user of the terminal (col. 18, line 26-col 19, line 5, col. 25, lines 25-50).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s invention to include the abovementioned limitation, as taught by Bernstein, for the advantage of displaying the engagement indications at the corresponding time.
However, Wang and Bernstein fail to disclose:
the second information including information representing a first play position corresponding to the first input of the user of the terminal;
creating a digest video corresponding to the first input of the user of the terminal;
wherein the creating the digest video comprises:
setting a start point of extraction of a section which is a second part of the video, based on the second information, and
setting an end point of extraction of the section at a time point later than the start point of extraction based on at least one of an amount of comments of the user to a distributor of the video distributed from the server, a voice of the distributor, or a facial expression of the distributor.
In an analogous art, Schleicher discloses:
the second information (player excitement level) including information representing a first play position corresponding to the first input (various input devices such as a voice recognition device and a gesture recognition device are used to determine player excitement level) of the user of the terminal (col. 5, lines 5-22);
creating a digest video (highlight reel) corresponding to the first input of the user of the terminal (A highlight reel is automatically created based on player excitement level. Various input devices such as a voice recognition device and a gesture recognition device are used to determine player excitement level. The player excitement level is compared to an event threshold; the gaming platform marks a highlight start point when the player excitement value exceeds the event threshold. A highlight end point is marked when the player excitement value falls below an audience interaction threshold. Events are marked for inclusion in a highlight reel based on player excitement; col. 9, lines 5-57, col. 10, lines 13-40);
wherein the creating the digest video comprises:
setting a start point of extraction of a section which is a second part of the video, based on the second information (col. 9, lines 5-57, col. 10, lines 13-40), and
setting an end point of extraction of the section at a time point later than the start point of extraction based on at least one of a voice of the distributor (voice recognition device) or a facial expression (gesture recognition device or biometric input device) of the distributor (col. 9, lines 5-57, col. 10, lines 13-40).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang and Bernstein’s invention to include the abovementioned limitation, as taught by Schleicher, for the advantage of automating the process of selecting the most interesting segments for a highlight reel.
As for claim 2, Wang discloses the information processing method according to claim 1, wherein the first information includes gift information of the user for the video (While the viewer is watching a live video, the viewer sends the performer a virtual gift to show love and support to the performer; [0021], [0026], [0039]).
As for claim 4, Bernstein discloses wherein the information representing the first play position includes information related to a time (col. 18, line 26 - col. 19, line 5; col. 25, lines 25-50).
As for claim 6, the modified Wang discloses:
wherein a plurality of first inputs including the first input are performed by the user of the terminal (Wang: While viewers watch a live video, they send the performer virtual gifts to show love and support to the performer; [0021], [0026], [0039]); and
wherein the method further comprises playing, by the at least one processor, on the display section each part corresponding to each of the plurality of first inputs in the video, based on the second input for the second information by the user of the terminal (Bernstein: Referring to Fig. 3B, viewers send engagement representations which represent signals of appreciation from the viewers. The server provides an engagement indication with the real-time video stream so that other viewers can see the engagement representations; col. 18, line 26 - col. 19, line 5. Associating the indication with a time enables the displaying device to determine how long to display a representation of the engagement and, during replay, when to begin displaying the representation of the engagement; col. 25, lines 25-50).
As for claim 7, Bernstein discloses wherein the second information includes data related to a reaction of the user to the video (Fig. 3B illustrates hearts from viewers.).
As for claim 8, Bernstein discloses:
acquiring, by the at least one processor, third information (engagement indication from another viewer) including information representing a second play position of the video, the third information corresponding to a third input on a first display section of the first terminal displaying the video by the first user of the first terminal different from the terminal (Referring to Fig. 3B, viewers send engagement representations which represent signals of appreciation from the viewers. The server provides an engagement indication with the real-time video stream so that other viewers can see the engagement representations; col. 18, line 26 - col. 19, line 5. Associating the indication with a time enables the displaying device to determine how long to display a representation of the engagement and, during replay, when to begin displaying the representation of the engagement; col. 25, lines 25-50); and
displaying the third information (350 & 355 – fig. 3B) on the display section of the terminal (col. 18, line 26 - col. 19, line 5).
As for claim 9, Schleicher discloses wherein the second information (player excitement level) includes information about a first video (main video) obtained by extracting at least a part of the video, based on the first input (various input devices such as a voice recognition device and a gesture recognition device are used to determine player excitement level) of the user of the terminal (col. 5, lines 5-22), and wherein the method further comprises playing, by the at least one processor, the first video on the display section, based on the second input for the second information by the user of the terminal (col. 5, lines 5-22).
As for claim 10, Schleicher discloses wherein a plurality of first inputs including the first input are performed by the user of the terminal, and wherein the first video is formed based on each part corresponding to each of the plurality of first inputs in the video (The highlight reel is formed based on the first inputs; see the above rejection of claim 1; col. 9, lines 5-57, col. 10, lines 13-40).
As for claim 11, Schleicher discloses further comprising switching, by the at least one processor, the video played on the display section from the first video to the video distributed from the server based on an input of the user of the terminal (col. 9, lines 5-57, col. 10, lines 13-40).
As for claim 14, Schleicher discloses wherein the first video is set by thinning out a predetermined part between the start point of the first video and the end point of the first video (col. 9, lines 5-57, col. 10, lines 13-40).
As for claim 16, Bernstein discloses wherein the second information includes information (types of engagement representations) related to a content of the first video (col. 18, line 26 - col. 19, line 5).
As for claim 17, Bernstein discloses wherein the video includes image information based on a fourth input performed on a first display section of a first terminal displaying the video by the first user of the first terminal different from the user of the terminal, and wherein in the first video, the image information is removed (Each icon may appear on the display for 3 or 5 seconds and then disappears; col. 8, lines 12-25).
As for claim 18, the modified Wang discloses wherein the second information includes information related to a content of the first input (see rejection of claim 1).
Claims 19 and 20 contain the limitations of claim 1 and are analyzed as previously discussed with respect to that claim.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Wang, Bernstein, and Schleicher, as applied to claim 3 above, and further in view of Al Majid (2021/0099406).
As for claim 5, Wang, Bernstein, and Schleicher fail to disclose wherein the information representing the first play position includes information representing a play position on a seek bar of the video.
In an analogous art, Al Majid discloses wherein the information representing the first play position includes information representing a video play position on a seek bar of the video ([0045], [0046], [0113], [0114], [0148]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang, Bernstein, and Schleicher’s invention to include the abovementioned limitation, as taught by Al Majid, for the advantage of improving the efficiency of using the electronic device by reducing the number of screens and interfaces a user has to navigate through to find content to consume.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Wang, Bernstein, and Schleicher, as applied to claim 9 above, and further in view of Swaminathan (2019/0377955).
As for claim 15, Wang, Bernstein, and Schleicher fail to disclose wherein the second information includes information about a thumbnail corresponding to the first video.
In an analogous art, Swaminathan discloses wherein the second information includes information about a thumbnail corresponding to the first video ([0031]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang, Bernstein, and Schleicher’s invention to include the abovementioned limitation, as taught by Swaminathan, for the advantage of selecting a relevant thumbnail.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUMAIYA A CHOWDHURY whose telephone number is (571)272-8567. The examiner can normally be reached 9:00 AM - 3:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, NATHAN FLYNN can be reached at (571)272-1915. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
SUMAIYA A. CHOWDHURY
Examiner
Art Unit 2421
/SUMAIYA A CHOWDHURY/Primary Examiner, Art Unit 2421