DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, filed December 16, 2025, with respect to the rejected claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The examiner must, however, address any arguments presented by the applicant which are still relevant to any references being applied.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., “asynchronous shared viewing,” Remarks, p. 13) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 21, 28, 35, 41, 44, and 47 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Ketkar (US 2012/0030587), Phillips et al. (US 2012/0117017), Roberts et al. (US 2011/0113440), and Paul (US 2017/0187770).
Regarding claim 21, Ketkar teaches a system for asynchronous consumption of streaming media, the system comprising:
at least one database ([0033]-[0034]); and
a server in network communication with the at least one database ([0033], [0034], “Media locator 104 may be resident in user equipment device 102, a remote server, or partially stored on both. … Media locator 104 may search in social media content source 106, where media may be tagged with information from previous recommendations, as well as data from social media data source 108.” [0055], Fig. 1), the server configured to perform operations including:
linking a first user profile registered with a media streaming application with a second user profile registered with the media streaming application for asynchronous interaction with a shared multimedia content on the media streaming application based on receiving, at the server, an acceptance indication from a second user device associated with the second user profile of a multimedia content sharing request generated by a first user device associated with the first user profile ([0075], “The social media guidance application may aid the user in making recommendations by automatically posting program recommendations to his profile. FIG. 6 shows illustrative sharing preferences screen 600 for a social media guidance application. The application may monitor the user's viewing activity and instruct processing circuitry 206 (FIG. 2) to automatically update the user's social network. … The user may further choose between publishing the viewing activity to one or more social networks (option 604), recommending the program to one or more friends (option 606), sending the information to a community metadata set (option 608), or any combination thereof.” [0079], “FIG. 9 shows an illustrative example of a user receiving recommendation 902 to add another user to their social network (or friend list). … The user may have the option to either add the suggested user as a friend (option 904), or ignore the suggestion (option 906).” [0091], “The user may choose to view the media, save the media for later viewing, record the next airing of the program to, e.g., storage 208 (FIG. 2), set a reminder for the next airing of the program, or ignore the program recommendation.”);
enabling, based on the linking, viewing activity of both a first user associated with the first user profile and a second user associated with the second user profile to be shared and synced with one another ([0075], “The social media guidance application may aid the user in making recommendations by automatically posting program recommendations to his profile. FIG. 6 shows illustrative sharing preferences screen 600 for a social media guidance application. The application may monitor the user's viewing activity and instruct processing circuitry 206 (FIG. 2) to automatically update the user's social network. … The user may further choose between publishing the viewing activity to one or more social networks (option 604), recommending the program to one or more friends (option 606), sending the information to a community metadata set (option 608), or any combination thereof.” [0079], “FIG. 9 shows an illustrative example of a user receiving recommendation 902 to add another user to their social network (or friend list). … The user may have the option to either add the suggested user as a friend (option 904), or ignore the suggestion (option 906).”).
Ketkar teaches the limitations specified above; however, the combination does not expressly teach wherein the enabling comprises enabling either the first user profile or the second user profile to influence an appearance of content overlaid within the shared multimedia content that is representative of asynchronous progress of the other user profile within the shared multimedia content; identifying, during playback of the shared multimedia content, one or more interaction activities with the shared multimedia content, and a playback position associated with each of the one or more interaction activities, facilitated by the first user profile or the second user profile, wherein identifying the one or more interaction activities comprises: identifying a first selection, by the first user profile or the second user profile, of an overlay control icon; generating a list of the one or more interaction activities available for an additional selection in response to the first selection of the overlay control icon, wherein the list comprises a plurality of different types of the one or more interaction activities; and detecting a second selection of one of the plurality of different types of the one or more interaction activities in the list; and transmitting, to another of the first user profile or the second user profile, an indication of the one or more interaction activities in synchronization with playback of the shared multimedia content by the another of the first user profile or the second user profile, such that the indication for each of the one or more interaction activities are displayed at the respective playback position during asynchronous viewing.
Phillips teaches:
enabling either a first user profile or a second user profile to influence an appearance of content overlaid within a shared multimedia content that is representative of asynchronous progress of the other user profile within the shared multimedia content ([0093], “Use Case 1: Family Sits Down for Weekly Watching.” [0094], “1. The Johnson family (Dad, Mom, Jen, and Rob) sits down together to watch TV on a Tuesday night.” [0095], “2. Dad turns on the TV and the enhanced media device 10/62 that they've been using.” [0096], “3. The media device 10/62 determines that the whole family is in the room and determines what TV series they have all been watching and where they are in watching in relation to one another.” [0097], “4. A hierarchical list of video series item recommendations is generated and presented to the family as illustrated in FIG. 13. At the top of the list is the television series Glee.” [0098], “5. Glee was on last night and all of the members of the family have seen all of the previous episodes of this series, but none have seen the most recent episode, which is episode 7.” [0099], “6. The system suggests to the family that they watch episode 7 of Glee together.” [0100], “7. The family agrees and Dad selects to watch the seventh episode of Glee.” [0101], “8. Playback begins.” Fig. 13, a window is overlaid within a shared multimedia content, i.e., “Glee,” that is representative of asynchronous progress of another user profile within the shared multimedia content).
In view of Phillips’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination wherein the enabling comprises enabling either the first user profile or the second user profile to influence an appearance of content overlaid within the shared multimedia content that is representative of asynchronous progress of the other user profile within the shared multimedia content. The modification would serve to further facilitate content discovery and selection for users.
The combination teaches the limitations specified above; however, the combination does not expressly teach identifying, during playback of the shared multimedia content, one or more interaction activities with the shared multimedia content, and a playback position associated with each of the one or more interaction activities, facilitated by the first user profile or the second user profile, wherein identifying the one or more interaction activities comprises: identifying a first selection, by the first user profile or the second user profile, of an overlay control icon; generating a list of the one or more interaction activities available for an additional selection in response to the first selection of the overlay control icon, wherein the list comprises a plurality of different types of the one or more interaction activities; and detecting a second selection of one of the plurality of different types of the one or more interaction activities in the list; and transmitting, to another of the first user profile or the second user profile, an indication of the one or more interaction activities in synchronization with playback of the shared multimedia content by the another of the first user profile or the second user profile, such that the indication for each of the one or more interaction activities are displayed at the respective playback position during asynchronous viewing.
Roberts teaches:
Identifying, during playback of the shared multimedia content, one or more interaction activities with shared multimedia content, and a playback position associated with each of the one or more interaction activities, facilitated by a first user or a second user; and transmitting, to another of a first user profile or a second user profile, an indication of the one or more interaction activities in synchronization with playback of the shared multimedia content by the another of the first user profile or the second user profile, such that the indication for each of the one or more interaction activities are displayed at the respective playback position during asynchronous viewing ([0044], “As the screenshot 600 illustrates, comments have been inserted at various points in the timeline 601 of the media program. The spheres in the screenshot 600 can indicate that a comment has been inserted at that particular point in the timeline 601 of the media program. For example, a single comment 602, a single comment 604, a single comment 606, and four comments 608 have been inserted into the timeline 601. Once the user-generated comments are associated with their respective media content, the user can utilize the client program to transmit the media content, commentary/comments, and/or links to the content and commentary to the server 510.” [0049], “In step 704, as the media content progresses, the first user can make comments regarding media content at different times during the playing of the media content. In step 706, the first user ends the media evaluation session when the media content ends. In step 708, at a different time, a second user signals the server 510 to start a media evaluation session using his own communication device, like communication device 512, of the same media content previously evaluated by the first user. In step 710, the second user makes his own comments during the playing of the media content and finishes his media evaluation. 
In step 712, the client program included in the server 510 synchronizes the asynchronous commentary timelines of the first and second users. In step 714, the client program inserts the synchronized comments in a temporal vicinity of the media content as, for example, an overlay as shown in FIG. 6. In step 716, additional users can add there own comments or commentary while playing the media content using a media device such as media devices 502, 504, and 506. The client program in server 510 can then combine all of the comments of all of the users into a single commentary timeline as at step 718.”).
In view of Roberts’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination to include identifying one or more interaction activities with the shared multimedia content, and a playback position associated with each of the one or more interaction activities, facilitated by the first user profile or the second user profile; and transmitting, to another of the first user profile or the second user profile, an indication of the one or more interaction activities in synchronization with playback of the shared multimedia content by the another of the first user profile or the second user profile, such that the indication for each of the one or more interaction activities are displayed at the respective playback position during asynchronous viewing.
The combination teaches the limitations specified above; however, the combination does not expressly teach wherein identifying the one or more interaction activities comprises: identifying a first selection, by the first user profile or the second user profile, of an overlay control icon; generating a list of the one or more interaction activities available for an additional selection in response to the first selection of the overlay control icon, wherein the list comprises a plurality of different types of the one or more interaction activities; and detecting a second selection of one of the plurality of different types of the one or more interaction activities in the list.
Paul teaches:
identifying a first selection of an overlay control icon ([0088], “as shown in FIG. 3E and in response to the user selecting the comment control 310b, the social networking application 130 provides an activity control list 334 within the touch screen display 302a of the client computing device 300a.” Figs. 3D-3E);
generating a list of one or more interaction activities available for an additional selection in response to the selection of the overlay control icon ([0088], “For example, as shown in FIG. 3E and in response to the user selecting the comment control 310b, the social networking application 130 provides an activity control list 334 within the touch screen display 302a of the client computing device 300a. In one or more embodiments, the activity control list 334 includes activity controls 336a-336d. In at least one embodiment, the user of the client computing device 300a can instantly share only the selected portion of the digital video with additional social networking system users by selecting the activity control 336a.” Figs. 3D-3E),
wherein the list comprises a plurality of different types of the one or more interaction activities ([0088], “In one or more embodiments, the activity control list 334 includes activity controls 336a-336d.” Figs. 3D-3E); and
detecting a second selection of one of the plurality of different types of the one or more interaction activities in the list ([0088], “In one or more embodiments, the activity control list 334 includes activity controls 336a-336d. In at least one embodiment, the user of the client computing device 300a can instantly share only the selected portion of the digital video with additional social networking system users by selecting the activity control 336a.” [0089], “Furthermore, in at least one embodiment and in response to the user selecting the activity control 336b, the video manager 106 can enable the user to compose a post directed at either the full digital video or only at the selected portion of the digital video. Similarly, in response to the user selecting the activity control 336c, the video manager 106 can enable the user to send either the full digital video or the selected portion of the digital video as part of an electronic message. Finally, in response to the user selecting the activity control 336d, the video manager 106 can provide the user with a hyperlink that the user can copy into other applications that is directed to either the full digital video or the selected portion of the digital video.” Figs. 3D-3E).
In view of Paul’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination wherein identifying the one or more interaction activities comprises: identifying a first selection, by the first user profile or the second user profile, of an overlay control icon; generating a list of the one or more interaction activities available for an additional selection in response to the first selection of the overlay control icon, wherein the list comprises a plurality of different types of the one or more interaction activities; and detecting a second selection of one of the plurality of different types of the one or more interaction activities in the list. The modification would serve to facilitate navigation and selection of additional user functions, thereby enhancing convenience for users.
The grounds of rejection of claim 21 under 35 USC § 103 are similarly applied to claim 28.
Regarding claim 35, Ketkar teaches a non-transitory computer-readable medium storing computer-executable instructions which, when executed by a server in network communication with at least one database ([0033], [0034], “Media locator 104 may be resident in user equipment device 102, a remote server, or partially stored on both. … Media locator 104 may search in social media content source 106, where media may be tagged with information from previous recommendations, as well as data from social media data source 108.” [0039]-[0040], Figs. 1-2). The grounds of rejection of claim 21 under 35 USC § 103 are similarly applied to the remaining limitations of claim 35.
Regarding claims 41, 44, and 47, the combination further teaches wherein the one or more interaction activities comprises one or more of: comments, reactions, or favorite moment designations (Roberts: [0044], “As the screenshot 600 illustrates, comments have been inserted at various points in the timeline 601 of the media program. The spheres in the screenshot 600 can indicate that a comment has been inserted at that particular point in the timeline 601 of the media program. For example, a single comment 602, a single comment 604, a single comment 606, and four comments 608 have been inserted into the timeline 601. Once the user-generated comments are associated with their respective media content, the user can utilize the client program to transmit the media content, commentary/comments, and/or links to the content and commentary to the server 510.”).
Claim(s) 22, 24, 29, 31, 36, and 38 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ketkar, Phillips, Roberts, Paul, and Bagga et al. (US 2016/0142783).
Regarding claims 22, 29, and 36, the combination teaches the limitations specified above; however, the combination does not expressly teach wherein the server is configured to perform operations further including:
identifying, based at least on past viewing activity associated with the second user, multimedia content to recommend to the first user; and
constructing, based on the identifying, a recommendation rail for the first user profile including the identified multimedia content, wherein the identified multimedia content comprises a plurality of multimedia content options.
Bagga teaches:
identifying, based at least on past viewing activity associated with a second user, multimedia content to recommend to a first user ([0093], “In step 801, a computing device such as an application server 107 may monitor media consumption of a plurality of users. … Such media consumption scores may be associated with a user profile and with their respective media assets.” [0094], “By monitoring the media consumption of a plurality of users, in step 802, similar users may be identified. Users are found to be similar if they have similar media consumption patterns and preferences. If two or more users have similar favorite media asset types and similar favorite media assets and media asset series, then these users may be determined to be similar to one another. … Identifying similar users is useful for generating personalized media recommendations to the user. Media content of interest to a user identified to have similar preferences as a given user may be recommended to the given user.” [0095], Figs. 3A-3B, 8A);
constructing, based on the identifying, a recommendation rail for the first user profile including the identified multimedia content ([0095], “The user's navigation of the personalized media interface may also initiate a request to view menu pages such as those of FIGS. 3A-B, and consequently may initiate a request to generate personalized menus.” [0096], “If such a request has been received, then in step 804, the menu category for the personalized menu to be generated may be identified.” Figs. 3A-3B, 8A),
wherein the identified multimedia content comprises a plurality of multimedia content options (Figs. 3A-3B).
In view of Bagga’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ketkar wherein the server is configured to perform operations further including: identifying, based at least on past viewing activity associated with the second user, multimedia content to recommend to the first user; and constructing, based on the identifying, a recommendation rail for the first user profile including the identified multimedia content, wherein the identified multimedia content comprises a plurality of multimedia content options. The modification would serve to facilitate content discovery and selection for users, thereby improving the user experience.
Regarding claims 24, 31, and 38, the combination further teaches wherein the plurality of multimedia content options include a combination of shows and/or movies (Bagga: [0040], “For example, menu 304 is a personalized menu containing media assets 312, 314, 316, 318, and 320 related to cooking television shows. Menu 306 is a personalized menu containing listings for suspense and/or thriller movies with a tense tone. Menu 308 may contain media asset listings for television shows with a genre of action and/or a theme of superheroes.” Figs. 3A-3B).
Claim(s) 27 and 34 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Ketkar, Phillips, Roberts, Paul, Bagga, and Poling et al. (US 2008/0046928).
Regarding claims 27 and 34, the combination teaches the limitations specified above; however, the combination does not expressly teach wherein each of the plurality of multimedia content options comprises a content preview visual.
Poling teaches each of a plurality of multimedia content options comprises a content preview visual ([0028], “FIG. 3 illustrates another example screenshot 300 of a unified discovery interface presenting broadcast content together with non-broadcast content. For example, a bold line around a non-broadcast content region 308 indicates that a user has navigated to the non-broadcast IP content region 308 associated with the content provider of channel ‘4,’ (i.e., ABC channel KOMO). Within the non-broadcast content region 308, the user may navigate between multiple non-broadcast content (e.g., IP content) offerings provided by that content provider (or associated content providers). In the implementation of FIG. 3, for example, a user has navigated to a ‘Lost’ episode 310, called ‘The Other 48 Days’, that is available as IP content within the non-broadcast content region 308. Other options shown in FIG. 3 for the same content provider include ‘Extreme Makeover,’ ‘Boston Legal,’ ‘Grey's Anatomy,’ ‘Less than Perfect,’ and ‘Desperate Housewives,’ although other IP content may also be available (e.g., from navigating further right or left within the non-broadcast content region 308).” Fig. 3).
In view of Poling’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination such that each of the plurality of multimedia content options comprises a content preview visual. The modification would serve to facilitate content navigation and selection for users.
Claim(s) 42, 45, and 48 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Ketkar, Phillips, Roberts, Paul, and Yang et al. (US 2023/0007064).
Regarding claims 42, 45, and 48, the combination further teaches wherein the server is configured to perform operations further including:
presenting the shared multimedia content on the another of the first user profile or the second user profile (Roberts: [0044], “As the screenshot 600 illustrates, comments have been inserted at various points in the timeline 601 of the media program. The spheres in the screenshot 600 can indicate that a comment has been inserted at that particular point in the timeline 601 of the media program. For example, a single comment 602, a single comment 604, a single comment 606, and four comments 608 have been inserted into the timeline 601. Once the user-generated comments are associated with their respective media content, the user can utilize the client program to transmit the media content, commentary/comments, and/or links to the content and commentary to the server 510.”).
However, the combination does not expressly teach displaying, along a length of a playback bar in the shared multimedia content, icon representations of the one or more interaction activities.
Yang teaches displaying, along a length of a playback bar in a shared multimedia content, icon representations of one or more interaction activities ([0085], “Furthermore, progress bar 216 comprises emoji/sticker/shape indicators 1008 marking timestamps at which emojis, shapes, and/or stickers have been applied to frames of the video data. Indicators 1008 may also mark timestamps of one or more other types of annotations, such as comments and/or drawing annotations. According to some embodiments, the visual depiction of each of indicators 1008 reflects one or more characteristics of the represented annotation. In the example of FIG. 10, each indicator 1008 is an example emoji/sticker/shape applied to the frame. The emoji/sticker/shape used as the visual representation of an indicator 1008 may be selected in any way, e.g., based on the emoji with the highest number of applications to the frame, the emoji/sticker/shape that was first applied to the frame, etc. Distinct visual representation aspects for indicators 1008 may correspond to the different types of annotation data. Furthermore, the visual representation of an indicator may include an identifier of a user that created the represented annotation, or of a set of users that have generated annotations on the frame. According to some embodiments, selection of one of indicators 1008 causes generation of a video control command that causes the video data to jump to the timestamp marked by the selected indicator.” Fig. 10).
In view of Yang’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination to include displaying, along a length of a playback bar in the shared multimedia content, icon representations of the one or more interaction activities. The modification would serve to further enhance collaboration and interaction with other users.
Claim(s) 43, 46, and 49 is/are rejected under 35 U.S.C. 103 as being unpatentable over a combination of Ketkar, Phillips, Roberts, Paul, Yang, and Anker et al. (US 2019/0208279).
Regarding claims 43, 46, and 49, the combination teaches the limitations specified above; however, the combination does not expressly teach wherein a visual appearance of each of the icon representations is based on a type of the one or more interaction activities.
Anker teaches wherein a visual appearance of each of icon representations is based on a type of one or more interaction activities ([0028], “FIGS. 2A-2D illustrate various interfaces (i.e., interfaces 200A-200D) related to connected TV comments and reactions.” [0030], “FIG. 2B illustrates an example interface 200B that additionally includes comment 155.” [0031], “In some embodiments, reactions 170 may include a reaction 170A that indicates that user 101 likes the selected video content 140, a reaction 170B that indicates that user 101 loves the selected video content 140, a reaction 170C that indicates that user 101 thinks the selected video content 140 is funny, a reaction 170D that indicates that user 101 is wowed by the selected video content 140, a reaction 170E that indicates that user 101 is saddened by the selected video content 140, and a reaction 170F that indicates that user 101 is angered by the selected video content 140.”).
In view of Anker’s teaching, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination wherein a visual appearance of each of the icon representations is based on a type of the one or more interaction activities. The modification would aid users in identifying different types of comments, thereby improving the user experience.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL R TELAN whose telephone number is (571)270-5940. The examiner can normally be reached 9:30AM-6:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nasser Goodarzi can be reached at (571) 272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL R TELAN/ Primary Examiner, Art Unit 2426