DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This is in response to applicant’s amendment/response filed on 02/26/2026, which has been entered and made of record. Claims 4, 10, 12, and 16 have been amended. Claim 1 was previously cancelled. No claims have been added. Claims 2-19 are pending in the application.
Response to Arguments
Applicant’s arguments with respect to the independent and dependent claims have been fully considered but are not persuasive.
Applicant submits that “The publication date of Liu is October 29, 2021, while the priority date of Applicant's pending application is June 30, 2021. While the filing date of Liu, January 18, 2021, is before Applicant's priority date, Liu is not prior art under 35 U.S.C. § 102." (Remarks, p. 7).
The examiner disagrees with Applicant’s arguments.
Reference Liu is used to reject the limitations “identifying a second frame of the media content, wherein closed caption data for the second frame of the media content comprises the at least one keyword from the closed caption data for the first frame of the media content”. These limitations were not recited in the parent patent application 17/363,767, but were introduced in the instant application filed on May 30, 2024. Because the limitations are not supported by the parent application, they are not entitled to the parent’s priority date; their effective filing date is May 30, 2024, which is after Liu’s publication date of October 29, 2021. Therefore, Liu qualifies as prior art with respect to these specific limitations.
The arguments regarding the dependent claims by virtue of their dependency are moot because the independent claims are not allowable.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-19 are rejected under 35 U.S.C. 103 as being unpatentable over Blattner et al. (US 20180054466 A1) in view of Liu et al. (CN 113569037 A), and further in view of Wilhite et al. (US 20170246545 A1).
Regarding Claim 8, Blattner discloses A system (¶23 reciting “a system”) comprising:
input/output circuitry configured to: (¶87 and Fig. 6 disclosing client devices 602a and 602b. Claim 18 reciting “A system for enabling perception of multiple online personas in an instant messaging communications session, the system comprising a processor connected to a storage device and one or more input/output devices”)
receive media content for display in a group activity session; (¶89 reciting “The message then is received by the second client 602b (step 614). Upon receipt of the message, the second client 602b displays the message in a user interface in which messages from the user of the first client 602a are displayed.”)
generate for display a display screen including the media content and at least one image, wherein the at least one image corresponds to a respective user in the group watch session; (Fig. 2 showing an image/avatar for each user; and ¶71 reciting “The processor displays a user interface for the instant messaging session including the avatar associated with the sender and wallpaper applied to the user interface over which the avatar is displayed (step 307).” ¶89 disclosing displaying the received message (i.e., the media content), and reciting “Upon receipt of the message, the second client 602b displays the message in a user interface in which messages from the user of the first client 602a are displayed.”)
control circuitry configured to: (Fig. 6, host system 604)
identify a first frame of the media content; (¶72 reciting “The processor receives text of a Message entered by the sender to be sent to the instant message recipient (step 310) and sends a message corresponding to the entered text to the recipient (step 315).”)
access closed caption data for the first frame of the media content;
extract at least one keyword from the closed caption data for the first frame of the media content;
(¶51 disclosing a keyword “LOL”, and reciting “the text of a message may include a character string “LOL,” which is an acronym that stands for “laughing out loud.” The processor compares the text of the message to multiple animation triggers that are associated with the avatar projected by the sender (step 320). A trigger may include any letter, number, or symbol that may be typed or otherwise entered using a keyboard or keypad.” Further, ¶75 disclosing extracting a trigger character string (i.e., a keyword) in the received message (i.e., a frame of the media content), and reciting “Referring again to FIG. 3, the processor determines whether a trigger is included within the message (step 325).”)
identify a second frame of the media content, wherein the second frame of the media content comprises the at least one keyword from the closed caption data for the first frame of the media content; (¶73 reciting “Referring also to FIG. 4, examples 400 of triggers associated with animations 405a-405q of a particular avatar model are shown. . . As illustrated, a trigger may be an English word, such as 415a . . . Other examples of a trigger include a particular abbreviation, such as “lol” 411n, and an English phrase, such as “Oh no” 415e.”, where the animations correspond to a second frame of the media content)
select a second portion from the second frame of the media content; and
modify a background of the at least one image to include the selected second portion from the second frame of the media content.
(¶69 reciting “the text of a message sent to an instant message recipient is searched for an animation trigger and, when a trigger is found, the avatar that represents the instant message sender is animated in a particular manner based on the particular trigger that is found. The wallpaper displayed for the avatar includes an animated object or animated objects. The object or objects may be animated based on the content of the instant message sent”. The wallpaper animation corresponds to a modified background including the second frame.)
However, Blattner does not explicitly disclose identifying a second frame of the media content, wherein closed caption data for the second frame of the media content comprises the at least one keyword.
Liu teaches “a message processing method, apparatus, and readable storage medium, which can improve the efficiency of session message generation.” ([n0004]). More specifically, Liu teaches to identify a second frame (401e) based on the closed caption data of the second frame comprising the at least one keyword (e.g. keyword “Huangshan”), and recites “the computer device can . . . display the event information 401e corresponding to the matching interactive event 401d, as shown in the session interface 400c. The event information 401e may include the image data (e.g., a photo taken in Huangshan) contained in the matching interactive event 401d, as well as the corresponding event location information and event description information. For example, the event location information related to "Huangshan" at this time is the location where the above image data was taken: "Tangkou Town, Huangshan District, Huangshan City, Anhui Province" ([n0092]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system (taught by Blattner) to identify a second frame based on the closed caption data of the second frame comprising the keyword (taught by Liu). The suggestions/motivations would have been that “the efficiency of obtaining event information can be improved, and the efficiency of generating session messages can also be improved” ([n0043]), and to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.
However, Blattner in view of Liu does not explicitly disclose the group activity session being a group watch session.
Wilhite teaches “a game chat app” (ABST). ¶23 teaches a user can join a chatroom for a group chat session on a live sports event including two teams, and recites “In this example, the chatroom corresponds with the Golden State Warriors basketball team. . . an update is being automatically pushed into the Dub Nation chatroom regarding a live game involving the followed team. The update includes a short synopsis of the current state of the game (e.g., that the game is in the third quarter, what the score is, and game statistics of the leading player for each team)”. As shown in Fig. 6, a live sports event is displayed with chat.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system (taught by Blattner in view of Liu) to enable a user to join a group watch/chat session with a live sports event (taught by Wilhite). The suggestions/motivations would have been to solve the problem that “it may be difficult for multiple parties to socialize with each other about a recreational event, while simultaneously accessing event information normally supplied by recreational apps. In other words, it may be difficult for separate parties located remote from each other to simultaneously utilize multiple apps and experience the same recreational information. This may be especially true when the recreational event is live.” (¶¶4-6), and to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.
Regarding Claim 9, Blattner in view of Liu and Wilhite discloses The system of claim 8, wherein the control circuitry is further configured to modify the background of the at least one image by replacing the background of the at least one image with the selected second portion from the second frame of the media content. (Liu, [n0123] reciting “user B's client can render the original conversation interface into an anniversary interface based on the anniversary event information. For example, it can retrieve the image X sent by user A and user B on the anniversary day and then replace the background image of the conversation interface with image X.” The suggestions/motivations would have been the same as those set forth in the rejection of Claim 8.)
Regarding Claim 10, Blattner in view of Liu and Wilhite discloses The system of claim 8, wherein the control circuitry is further configured to:
retrieve user preference data from at least one user profile associated with a respective user in the group watch session; (Blattner, ¶141 disclosing retrieving user preference data (e.g. personality) based on user profile, and reciting “the features and functionality associated with the personality would be transparent to the instant message sender, and may be based upon one or more pre-selected profiles types when setting up the personality. For example, the instant message sender may be asked to choose from a group of personality types such as professional, management, informal, vacation, offbeat, etc.”) and
modify the background of the at least one image of the respective user based on the retrieved user preference data. (¶143 reciting “an instant message sender may assign a global avatar to all personalities, but assign different buddy sounds on a per-group basis to other personalities (e.g. work, family, friends), and assign buddy wallpaper and smileys on an individual basis to individual personalities corresponding to particular instant message recipients within a group.” In addition, ¶79 reciting “The triggers for the animation of wallpaper objects also may be user-configurable such that a user selects whether a particular type of animation is to be included, any animations are to be played, and triggers for one or more of the wallpaper objects.”)
Regarding Claim 11, Blattner in view of Liu and Wilhite discloses The system of claim 10, wherein the control circuitry is further configured to modify the background of the at least one image by:
selecting a visual effect based on the retrieved user preference data; (Blattner, ¶101 disclosing selecting animation (i.e., a visual effect) based on personality, and reciting “In one example of an avatar personality, an avatar named SoccerBuddy (not shown) is associated with an energetic personality. In fact, the personality of the SoccerBuddy avatar may be described as energetic, bouncy, confidently enthusiastic, and youthful. The SoccerBuddy avatar's behaviors reflect events in soccer matches. For example, the avatar's yell animation is an “ole, ole, ole” chant, his big-smile animation is “gooooooaaaaaallllll,” and, during a frown animation or a tongue-out animation, the avatar shows a yellow card.” In addition, Fig. 6, 616b reciting the Client 2 602b “Search text of message for animation triggers to identify a type of animation to play”. Therefore, Blattner discloses identifying an animation, i.e., a visual effect, based on the user profile.) and
generating for display the at least one image with the visual effect applied thereto. (Blattner, ¶101)
Regarding Claim 12, Blattner in view of Liu and Wilhite discloses The system of claim 8, wherein the control circuitry is further configured to modify the background of the at least one image by:
selecting a visual effect based on audio data or text data provided by a user profile during the group watch session; (Blattner, ¶141 disclosing selecting the personality based on user profile, and reciting “the features and functionality associated with the personality would be transparent to the instant message sender, and may be based upon one or more pre-selected profiles types when setting up the personality. For example, the instant message sender may be asked to choose from a group of personality types such as professional, management, informal, vacation, offbeat, etc.” In addition, Fig. 6, 616b reciting the Client 2 602b “Search text of message for animation triggers to identify a type of animation to play”. Therefore, Blattner discloses identifying an animation, i.e., a visual effect, based on the user profile.) and
generating for display the at least one image with the visual effect applied thereto. (Blattner, Fig. 6, 620b: “Play the identified animation for the first avatar that is associated with the user of Client 1”)
Regarding Claim 13, Blattner in view of Liu and Wilhite discloses The system of claim 8, wherein the at least one image comprises an avatar corresponding to a respective user in the group watch session. (Blattner, ¶59 reciting “The sender avatar 135 or the recipient avatar 115 may be animated to reflect the weather at the geographic locations of the sender and the recipient, respectively. For example, if rain is falling at the geographic location of the sender, then the sender avatar 135 may be animated to put on a rain coat or open an umbrella. The wallpaper corresponding to the sender avatar 135 also may include rain drops animated to appear to be falling on the sender avatar 135.”)
Claim 2 recites limitations similar to those of Claim 8 and is therefore rejected under the same rationale as Claim 8.
Claim 3 recites limitations similar to those of Claim 9 and is therefore rejected under the same rationale as Claim 9.
Claim 4 recites limitations similar to those of Claim 10 and is therefore rejected under the same rationale as Claim 10.
Claim 5 recites limitations similar to those of Claim 11 and is therefore rejected under the same rationale as Claim 11.
Claim 6 recites limitations similar to those of Claim 12 and is therefore rejected under the same rationale as Claim 12.
Claim 7 recites limitations similar to those of Claim 13 and is therefore rejected under the same rationale as Claim 13.
Claim 14 recites limitations similar to those of Claim 8 and is therefore rejected under the same rationale as Claim 8.
Claim 15 recites limitations similar to those of Claim 9 and is therefore rejected under the same rationale as Claim 9.
Claim 16 recites limitations similar to those of Claim 10 and is therefore rejected under the same rationale as Claim 10.
Claim 17 recites limitations similar to those of Claim 11 and is therefore rejected under the same rationale as Claim 11.
Claim 18 recites limitations similar to those of Claim 12 and is therefore rejected under the same rationale as Claim 12.
Claim 19 recites limitations similar to those of Claim 13 and is therefore rejected under the same rationale as Claim 13.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YI WANG whose telephone number is (571)272-6022. The examiner can normally be reached between 9am and 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YI WANG/Primary Examiner, Art Unit 2619