DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment in the Specification
The amendment to the specification filed 01/26/2026 is objected to under 35 U.S.C. 132(a) because it introduces new matter into the disclosure. 35 U.S.C. 132(a) states that no amendment shall introduce new matter into the disclosure of the invention. The added material which is not supported by the original disclosure is as follows: the specification, in paragraph [0039], introduces the new matter of “Thus, as indicated above, the commentary can be typed commentary (e.g., text, ideograms, ASCII depictions of facial expression, hash tags, punctuation, etc.)”.
Applicant is required to cancel the new matter in the reply to this Office Action.
Response to Arguments
Applicant’s arguments with respect to claims 1-11, 13-21, 23, 25-27 have been considered but are moot in view of the new grounds of rejection discussed below.
Applicant argues that paragraph [0039] of the specification has been amended to indicate that the commentary can be “typed commentary,” that written description support is provided in the existing text of paragraph [0039], and that no new matter has been added by any of the specification amendments (page 10). This argument is respectfully traversed.
Paragraph [0039] describes “sentiment data 22 may be expressed using ideograms, ASCII…or any natural language utterance or natural language writing…”. The commentary described in paragraph [0039] could be expressed as an utterance, and the “natural language writing” could be performed with a pencil or pen, but neither necessarily involves typing. Therefore, the newly added term “typed commentary” does not have support in the originally filed specification.
Applicant further argues that written description support for the newly added limitation “the graphical avatar is anthropomorphic” is clearly provided, e.g., by figures 6 and 8 (page 10). Examiner respectfully disagrees.
Figures 6 and 8, as well as paragraph [0112], describe avatar images 302A-E and a series of emoji ideograms 304A-E. Neither figures 6 and 8 nor the remainder of the specification provides support for the limitation “the graphical avatar is anthropomorphic”.
Applicant also argues that written description support for the newly added limitation “reaction of the plurality of audience members during the broadcast or stream to the video” is provided, e.g., by paragraph [0086] of the specification (page 10). Examiner respectfully disagrees.
Paragraph [0086] merely describes “…the use of capitalization can be indicative of either positive sentiment (“So excited”) or negative sentiment (“Boooo”)…”. Paragraph [0086] neither mentions “…during the broadcast or stream to the video” nor provides support for “reaction of the plurality of audience members during the broadcast or stream to the video”.
Applicant further argues: “during interview, both examiners mentioned that prior art exists in which television programs allegedly have overlaid indications of audience sentiment. However, no such prior art has been cited in any refusal. Thus, if the Office wishes to rely on such prior art, Applicant respectfully submitted that such prior art should be made of record and should be cited in a refusal, so that the Applicant is provided with an adequate opportunity to respond.” (page 11).
It is noted that amended claim 1 (and other independent claims) does not recite “overlaid indications of audience sentiment”. Instead, amended independent claim 1 recites “by performing one of: …to overlay onto the video a graphical avatar…; or …overlaying the graphical avatar that exhibits the at least one aggregate sentiment onto the video…”.
It is also noted that the limitations “…to overlay onto the video a graphical avatar” and “…overlaying the graphical avatar that exhibits the at least one aggregate sentiment onto the video” are recited in the alternative. Therefore, the prior art need only disclose one of the two limitations.
Furthermore, as repeatedly pointed out during the interview, and as clearly cited on page 7 of the non-final rejection, Sarkar discloses overlaying the graphical elements 432, 418, 430, 428, which exhibit the at least one aggregate sentiment/reaction, onto the video. Pages 9-10 of the non-final rejection also point out that Imamura discloses overlaying onto the video a graphical avatar that exhibits the at least one aggregate sentiment to the audience members via the graphical avatar (overlaying onto the video a graphical avatar (140a-c, 156, 158; see, for example, figure 8) that exhibits the at least one aggregate result/emotion to the users via the graphical avatar).
Applicant argues that the combination does not disclose “obtaining, from a plurality of audience members that are recipients of a broadcast or stream of a video, sentiment data indicating reactions, during the broadcast or stream, of the plurality of audience members to the video, the sentiment data including typed commentary from the recipients regarding the video,” because in MV, the audience reaction data corresponds to photographs of the audience, not typed commentary. Although MV discloses that social media information is gathered from users, the social media information is gathered before a video screening; thus, it does not indicate “reactions, during the broadcast or stream, of the plurality of audience members to the video” because it is gathered prior to the screened event. In Sarkar, the audience reactions are also not typed commentary; instead, they are physical reactions captured as video clips. See the Sarkar abstract: “systems and methods for presenting video clips of other viewers’ reactions to a viewer device during a live-video stream broadcast.” In Imamura, there are no typed-commentary audience reactions (pages 11-12).
In response to Applicant’s argument that MV does not disclose “sentiment data indicating reactions, during the broadcast or stream…”, Examiner notes that MV discloses capturing one or more images of one or more audience member reactions while the one or more audience members view the multi-media content (paragraph 0003), capturing images/reactions of audience members while the multi-media content is playing/streaming (paragraphs 0013, 0053), processing upcoming or ongoing multi-media content release information (paragraph 0010), and that MGHMC 312 can capture a stadium of audience members’ reactions while the audience members watch a video game tournament (paragraphs 0055, 0076). Since the reactions/moods are captured while the multi-media content is streaming, playing, or ongoing, MV discloses “sentiment data indicating reactions, during the broadcast or stream” (while the multi-media content is streaming or ongoing).
In addition, Sarkar discloses this feature as shown in the Abstract, as admitted by Applicant and indicated above. See also Imamura’s disclosure of receiving sentiments/reactions during a live-stream event (which reads on “…during the broadcast or stream”).
With respect to Applicant’s argument that MV, Sarkar, and Imamura do not disclose “typed commentary”, Examiner notes that MV describes that API 340 can determine the mood and/or emotion of the audience members by analyzing a writing style from audience members’ social posts via a tone analyzer (paragraph 0048; claims 7, 14, 20). Thus, the writing style meets the “natural language writing” described in paragraph [0039] of the instant application (see also MV, paragraphs 0011-0012). Sarkar discloses that users provide comments or instant messages, and that a text box is provided in which a broadcaster may input a comment in response to the video clip within video-graphical element 312 (paragraphs 0035, 0071-0073, 0087, figure 3A). Imamura also discloses providing comments (paragraph 0117). Since the comments are provided as instant messages or input by a user/broadcaster using a text box, it is obvious that the comments/messages are typed commentary.
It is also noted that independent claims 1 and 15 recite “the sentiment data including typed commentary from the recipients regarding the video or physical reactions of the audience members to the video”. Thus, the “typed commentary” and “physical reactions” recited in the claims are recited in the alternative. The prior art need only disclose either “typed commentary” or “physical reactions”, not both. MV, Sarkar, and Imamura each disclose the alternative limitation “the sentiment data including physical reactions of the audience members to the video”, as discussed above.
Although the Office is not required to provide prior art teaching “typed commentary”, since “typed commentary” is recited in the claims as an alternative limitation as discussed above, Official Notice is also taken that using “typed commentary” is well known in the art for providing comments, chat, etc. See also Wang (US 2017/0359619, paragraphs 0003-0004); Roberts et al. (US 8,839,306, col. 8, lines 45-49); and Archibong (US 2014/0068692, paragraph 0104) for teachings of “typed commentary”.
Applicant also argues that there is no reason to combine MV with either Sarkar or Imamura, and that the “predictable result” rationale set forth in the Office Action dated 1/24/2025 does not support the rejection (pages 12-15). Examiner respectfully disagrees.
Discussing the question of obviousness of claimed subject matter involving a combination of known elements, KSR Int'l Co. v. Teleflex Inc., 127 S. Ct. 1727 (2007), explains: when a work is available in one field of endeavor, design incentives and other market forces can prompt variations of it, either in the same field or a different one. If a person of ordinary skill can implement a predictable variation, § 103 likely bars its patentability. For the same reason, if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill. See Sakraida v. AG Pro, Inc., 425 U.S. 273 (1976), and Anderson’s-Black Rock, Inc. v. Pavement Salvage Co., 396 U.S. 57, 163 USPQ 673 (1969). In this case, all claimed limitations are known in the prior art, nothing in the MV reference prevents the combination with the teachings of Sarkar and Imamura, and the benefits of the combination have been provided. Therefore, the combination is proper.
With respect to Applicant’s argument regarding whether the types of data are considered non-functional descriptive material (page 11, item D, on pages 15-16), Examiner respectfully disagrees and maintains that particular types of data such as “typed commentary”, “sentiment data”, and “graphical avatar is anthropomorphic” are non-functional descriptive material and are not given patentable weight, because these particular types of data do not change the structure or functional operation of the system that provides aggregate/total user reactions/selections onto a video.
For the reasons given above, the rejections of claims 1-11, 13-21, 23, 26-28 are set forth below.
Claims 12, 22, 24-25 have been canceled.
Again, it is noted that non-functional descriptive material does not patentably distinguish over prior art that otherwise renders the claims unpatentable. See, for example, MPEP 2111.05 and MPEP 2112.01(III). See also In re Ngai, 367 F.3d 1336, 1339 (Fed. Cir. 2004); Ex parte Nehls, 88 USPQ2d 1883, 1887-90 (BPAI 2008) (precedential) (discussing cases pertaining to non-functional descriptive material); BPAI’s decision in Appeal 2009-010851 (Ser. No. 10/622,876); and BPAI’s decision in Appeal 2011-011929 (Ser. No. 11/709,170), pages 6-7. In this case, particular types of information such as “typed commentary”, “graphical avatar is anthropomorphic”, and “sentiment data” could be considered non-functional descriptive material and are not required to be given patentable weight, because these particular types of data do not functionally change the structure or operation of a system that provides aggregate user actions/feedback/selections as an overlay on a video. Accordingly, the limitations “typed commentary”, “graphical avatar is anthropomorphic”, and “sentiment data” are not given patentable weight.
Although non-functional descriptive material is not required to be considered, all claim limitations, including the non-functional descriptive material, are taught by the prior art as discussed below.
Also, although the Office action is not required to provide prior art for “typed commentary”, since the limitation is recited in the claims as an alternative limitation as discussed above, the rejection discussed below provides prior art teaching “typed commentary” as clear evidence that the limitation is well known.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 1-11, 13-21, 23, 26-28 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claims 1-11, 13-21, 23, 26-28, each of independent claims 1 and 15 recites the limitations “…during the broadcast or stream…”, “typed commentary…”, and “wherein the graphical avatar is anthropomorphic”, which were not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention (see the discussion in “Response to Arguments” above).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-8, 10, 14-17, 19, 26-28 are rejected under 35 U.S.C. 103 as being unpatentable over MV et al. (US 2020/0387934 A1) in view of Sarkar et al. (US 2018/0234738) and Imamura et al. (US 2021/0158781), and further in view of Wang (US 2017/0359619), Roberts et al. (US 8,839,306), or Archibong (US 2014/0068692).
Regarding claims 1 and 15, MV teaches a method and system comprising: one or more processors operatively connected to a memory (see, including but not limited to, figures 3, 5), and configured to:
obtain, from a plurality of audience members that are recipients of a broadcast or stream of a video, sentiment data indicating reactions, during the broadcast or stream, of the plurality of audience members to the video (obtain, from a plurality of audience members that receive/view a broadcast or stream of video, mood or expression data indicating reactions, while the multi-media content is ongoing or streaming/playing, of the plurality of audience members to the multi-media content comprising video – see, including but not limited to, figures 3-5, paragraphs 0010, 0013-0015, 0048, 0052-0053), the sentiment data including commentary (with writing style) from the recipients regarding the video or physical reactions of the audience members to the video (data associated with mood, expression, or reaction, including comments/reactions with writing style from the audience members regarding the video, or physical reactions/expressions of the audience members to the video – see paragraphs 0010-0011, 0013, 0052-0053, 0076, and the discussion in “Response to Arguments” above);
determine at least one aggregate sentiment of the plurality of audience members based on the sentiment data (determine at least one aggregate/total MGHMC by tracking or collecting responses/reactions of the plurality of audience members based on the moods/responses/expressions/reactions and/or social posts, etc., from each member for each scene/portion of the multi-media content – see, including but not limited to, figure 7, paragraphs 0009, 0012, 0056, 0058, 0076, 0086-0087).
However, MV is silent regarding facilitating augmentation of the broadcast or stream of the video to indicate the at least one aggregate sentiment in the video by performing one of:
transmit an instruction to a broadcast platform, which is a source of the broadcast or stream for the plurality of audience members, to overlay onto the video a graphical avatar that exhibits the at least one aggregate sentiment to the audience members via the graphical avatar, such that the augmented broadcast or stream indicates the at least one aggregate sentiment to the audience via the graphical avatar, wherein the graphical avatar is anthropomorphic; or
augment the broadcast or stream of the video, which includes provision of an overlay of graphical avatar that exhibits the at least one aggregate sentiment onto the video, such that the augmented broadcast or stream indicates the at least one aggregate sentiment to the audience members via the graphical avatar; wherein prior to the augmentation, graphical avatar is not part of the video.
Sarkar discloses one or more processors configured to: facilitate augmentation of the broadcast or stream of video to indicate at least one aggregate sentiment in the video by performing one of:
transmit an instruction to a broadcast platform, which is a source of the broadcast or stream of the plurality of audience members, to overlay onto the video a graphical element that exhibits the at least one aggregate sentiment to the audience member via the graphical element, such that the augmented broadcast or stream indicates the at least one aggregate sentiment to the audience members via the graphical element; or
augment the broadcast or stream of the video, which includes provision of an overlay of the graphical element that exhibits the at least one aggregate sentiment onto the video, such that the augmented broadcast or stream indicates the at least one aggregate sentiment to the audience members via the graphical element; wherein prior to augmentation, the graphical element is not part of the video (one or more processors configured to: facilitate augmentation/modification of the broadcast or stream of video to indicate at least one aggregate reaction/clip in the video by performing one of: transmitting an instruction/reaction to a broadcast platform (broadcaster device 106 and/or social networking system 102), which is a source of the broadcast or stream for the plurality of viewers, to overlay onto the video a graphical element/clip that exhibits the at least one aggregate sentiment/reaction to the viewers via graphical element 432, 428, 430, 418; or augmenting/modifying the broadcast or stream of video by overlaying the graphical element 432, 418, 430, 428 that exhibits the at least one aggregate sentiment/reaction onto the video, such that the modified broadcast or stream indicates the at least one aggregate sentiment/reaction to the viewers via the graphical element; wherein prior to the modifying step, the graphical element is not part of the live stream video – see, including but not limited to, figures 2A-2B, 3E, 4E, 5-6, 8, paragraphs 0008-0009, 0025, 0114-0116).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify MV with the teachings of Sarkar in order to yield the predictable result of allowing a user to interact with other broadcasters and receive an indication of other viewers’ level of engagement (paragraphs 0004, 0024).
Sarkar does not explicitly disclose that the graphical element comprises a graphical avatar, wherein the graphical avatar is anthropomorphic.
Additionally and/or alternatively, Imamura discloses a system comprising one or more processors connected to a memory (see, for example, figures 3-5) and configured to:
obtain, from a plurality of audience members, sentiment data indicating reactions, during the broadcast or stream, of the plurality of audience members to a video (obtaining, from a plurality of users, during the broadcast or stream, or while watching a live-stream event/video, sentiment data (data indicating emotions/reactions) of the users to the video – see, including but not limited to, figures 3-5, 8, paragraphs 0006-0007, 0009, 0026, 0055, and the discussion in “Response to Arguments” above);
determine at least one aggregate sentiment of the plurality of audience members based on the sentiment data (determining at least one aggregate result of the types of emotions of the plurality of users watching the live streaming of the same event/video – see, including but not limited to, figures 5, 8, 11, paragraphs 0033, 0090-0091, 0096, 0099);
facilitate augmentation of the broadcast or stream of the video to indicate the at least one aggregate sentiment in the video by performing one of:
transmit an instruction to overlay onto the video a graphical avatar that exhibits the at least one aggregate sentiment, such that the augmented broadcast or stream indicates the at least one aggregate sentiment to the audience members via the graphical avatar, wherein the graphical avatar is anthropomorphic (e.g., “anthropomorphic” is interpreted as avatars that may be images that look like a human, as described in figures 6-8, paragraph 0061); or
augment the broadcast or stream of the video, which includes provision of an overlay of the graphical avatar that exhibits the at least one aggregate sentiment onto the video, such that the augmented broadcast or stream indicates the at least one aggregate sentiment to the audience members via the graphical avatar (transmit an instruction to a broadcast platform comprising distribution apparatus 18, which provides the video for the plurality of users, to overlay onto the video a graphical avatar that looks like a human (e.g., 140a-c or 156, 158; see, for example, figure 8, paragraph 0063) that exhibits the at least one aggregate result/emotion to the users via the graphical avatar (140a-c, 156, 158)); or
augment the broadcast/stream of video by overlaying/superimposing the graphical avatar (e.g., graphical avatars 140c, 156, 158) that exhibits the at least one aggregate sentiment/emotion onto the video, such that the modified video indicates/shows the at least one aggregate sentiment/emotion to the users via the graphical avatar – see, including but not limited to, figures 5, 7-8, 11, paragraphs 0060-0062, 0071, 0074-0078, 0088, 0096-0097, 0099, 0103);
wherein prior to the augmentation, the graphical avatar is not part of the video (prior to the facilitating/augmentation step, the graphical avatar is not part of the video provided by the camera system 16 or video source – see, including but not limited to, figures 1, 5, paragraphs 0030-0032, 0054).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify MV with the teachings of Imamura, including augmenting the broadcast or stream of video by overlaying the graphical avatar that exhibits the at least one aggregate sentiment onto the video, such that the augmented broadcast or stream indicates the at least one aggregate sentiment to the audience members via the graphical avatar, wherein prior to the augmenting step the graphical avatar is not part of the video, in order to yield the predictable result of allowing users to share or see the types of emotions of other users (see, for example, paragraphs 0009, 0025-0026, 0072).
MV further discloses that the sentiment data comprises a writing style (see paragraph 0048). However, MV does not explicitly use the term “typed” commentary.
Wang, Roberts, or Archibong (hereinafter referred to as Wang/Roberts/Archibong) discloses sentiment data including typed commentary (see Wang: paragraphs 0003-0004; Roberts: col. 8, lines 45-49; Archibong: paragraph 0104).
Therefore, it would have been obvious to one of ordinary skill in the art to incorporate the term “typed” commentary into MV in order to yield a predictable result such as improving the watching experience of viewers (see Wang: paragraphs 0003-0004) or providing the user an alternative way to provide commentary via typing.
Regarding claims 2 and 16, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the method of claim 1, wherein the obtaining sentiment data includes obtaining the typed commentary from a broadcast platform (MV discloses that MGHMC retrieves social media and location-specific context information; in various embodiments, MGHMC can monitor and retrieve social media and location-specific content information of one or more audience members attending a media release event from one or more social media platforms (page 8, paragraph 0082); see also Imamura’s disclosure of obtaining comments/messages from the broadcast/distribution apparatus – figures 5, 8, paragraph 0117; Sarkar: figures 1-2B, 3E-3F, 4E; Wang: paragraphs 0003-0004; Roberts: col. 8, lines 45-49; Archibong: paragraph 0104).
Regarding claims 3 and 17, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the method of claim 1 or system of claim 15, wherein the video is provided through the broadcast platform; said obtaining sentiment data includes obtaining, from one or more social networks that are separate from the broadcast platform (MV discloses that MGHMC retrieves social media and location-specific context information; in various embodiments, MGHMC can monitor and retrieve social media and location-specific content information of one or more audience members attending a media release event from one or more social media platforms (page 8, paragraph 0082); see also Imamura’s disclosure in figures 1-5, 8, paragraphs 0030, 0055), a plurality of social media posts associated with the video; and said determining at least one aggregate sentiment includes determining the at least one aggregate sentiment based on content of the social media posts (MV discloses that embodiments of the present invention can generate promotional content, based on the identified relevant and/or sensational scenes, by extracting and collating the most relevant and/or sensational scenes from the multimedia content to generate interest in viewers to see the full content (page 2, paragraph 0013); see also Imamura’s disclosures in figures 1-5, 8, 11, paragraphs 0030, 0055, 0090-0091, 0117; Sarkar: figures 1-2B, 3E-3F, 4E; Archibong: figures 4, 9-10, paragraphs 0110, 0115, 0155).
Regarding claim 4, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the method of claim 3. MV teaches said determining the at least one aggregate sentiment based on content of the social media posts (page 2, paragraph 0013) comprises: weighting particular ones of the social media posts based on at least one of a quantity of followers of a user that posted the particular social media post, or reactions of other users of the social media network to the particular ones of the social media posts (page 6, paragraph 0058; see also Imamura’s disclosures in figures 1-5, 8, 11, paragraphs 0030, 0055, 0090-0091, 0117; Sarkar: figures 3E-3F, 4E, paragraphs 0077-0078, 0111-0115; Archibong: figures 4, 9-11, paragraphs 0115, 0155).
Regarding claim 5, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the method of claim 3. MV teaches said obtaining the sentiment data from the one or more social networks (MGHMC retrieves social media and location-specific context information; in various embodiments, MGHMC can monitor and retrieve social media and location-specific content information of one or more audience members attending a media release event from one or more social media platforms – page 8, paragraph 0082) comprises: transmitting a request to the one or more social networks for social media posts associated with the video (page 8, paragraph 0088), the request including criteria for identifying the social media posts associated with the video (page 6, paragraph 0058); and receiving the plurality of social media posts based on the request (page 1, paragraph 0003); see also Imamura’s disclosures in figures 1-5, 8, 11, paragraphs 0030, 0055, 0090-0091, 0117; Sarkar: figures 1-2B, 3E-3F, 4E, paragraphs 0077-0078, 0111-0115; Archibong: figures 4, 9-11, paragraphs 0115, 0155.
Regarding claim 7, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the method of claim 5, comprising: detecting that a predefined event has occurred in the video (for example, MV discloses that, in various embodiments, if the movement or concentration of audience members for a particular scene is above a predetermined threshold, then MGHMC 312 can identify that particular scene as a relevant scene – page 7, paragraph 0075); wherein said transmitting a request is performed based on the detecting (MV: page 7, paragraph 0076; Sarkar: figures 2A-2B, 3E-3F, 4E, paragraphs 0077-0078, 0111-0115; Imamura: paragraphs 0054, 0067, figures 4-5, 8-9).
Regarding claim 8, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the method of claim 7, wherein the detecting that a predefined event has occurred in the video comprises receiving an indication from the broadcast platform that the predefined event has occurred (a predetermined event, such as a time or event in the video, at which the broadcaster provides an indication/permission for the user to provide an interaction/clip with the video – see, including but not limited to, Sarkar: figures 2A-2B, 3E-5; Imamura: figures 4-5, 8-9).
Regarding claims 10 and 19, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the method and system of claims 1 and 15, wherein the determining at least one aggregate sentiment of the plurality of audience members is performed based on a presence of at least one of the following in the typed commentary: ideograms, punctuation, capitalization, hashtags, and non-hashtag keywords (ideograms such as text, emojis, avatars, etc. – see, including but not limited to, Imamura: figures 7-9, paragraphs 0075, 0113; Sarkar: figures 3D-5, paragraph 0115; Wang: paragraphs 0003-0004; Roberts: col. 8, lines 45-49; Archibong: paragraph 0104).
Regarding claim 14, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the method of claim 1, wherein the video includes a video depiction of a scene, and wherein:
said augmenting comprises modifying the lighting of the scene based at least on the at least one aggregate sentiment (modifying the lighting of the scene with a highlight/change of color based on the at least one aggregate sentiment/reaction); or
said transmitting an indication comprises transmitting an instruction to the broadcast platform to modify the lighting of the scene based on the at least one aggregate sentiment (transmitting an instruction to the broadcast/distribution device to modify the lighting/highlight of the scene based on the aggregate reactions/sentiment – see, including but not limited to, Sarkar: figures 2A-2B, 4E-4F, paragraphs 0113-0117; Imamura: figures 7-9).
Regarding claim 26, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the method of claim 1, wherein said facilitating augmenting comprises performing said transmitting the instruction to the broadcast platform (see, including but not limited to, Sarkar: figures 1-2B, 4E, 6; Imamura: figures 3-5, 7-9).
Regarding claim 27, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the method of claim 1, wherein said facilitating augmentation comprises performing said augmenting the broadcast or stream of the video (see, including but not limited to, Sarkar: figures 1-2B, 4E, 6; Imamura: figures 3-5, 7-9).
Regarding claim 28, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the audience sentiment detection system of claim 15, wherein the video includes a video depiction of a scene, and wherein to facilitate augmentation of the broadcast or stream of video, the one or more processors are configured (see, including but not limited to, Imamura: figures 1, 3-5, 8, paragraphs 0054, 0067) to:
modify the lighting of the scene based on the at least one aggregate sentiment; or
transmit, as part of the instruction, an instruction to the broadcast platform to modify the lighting of the scene based on the at least one aggregate sentiment (see, including but not limited to, Imamura: figures 3, 7-9, 11, paragraphs 0054, 0061, 0066, 0067, 0093; Sarkar: figures 3E-3F, 4E, paragraphs 0077-0078, 0111-0115).
Claims 6, 13, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over MV et al. (US 2020/0387934 A1) in view of Sarkar, Imamura and Wang/Roberts/Archibong as applied to claim 5 or claim 17 above, and further in view of Kim (US 2009/0019467).
Regarding claim 18, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the audience sentiment detection system of claim 17, wherein to obtain the sentiment data from the one or more social network, the one or more processors are configured to:
transmit a request to the one or more social network for social posts associated with the video, the request including criteria for identifying the social media posts associated with the performance; wherein the one or more processors are configured to transmit the request according to a predefined schedule, or based on detection of a predetermined event in the video (MV discloses that the MGHMC retrieves social media and location-specific context information; in various embodiments, the MGHMC can monitor and retrieve social media and location-specific content information of one or more audience members attending a media release event from one or more social media platforms – see paragraphs 0058, 0082, 0088; Imamura's disclosures in figures 1-5, 8, 11, paragraphs 0030, 0055, 0090-0091, 0117; Sarkar's disclosure in figures 1-2B, 3E-3F, 4E, paragraphs 0077-0078, 0111-0115).
MV in view of Sarkar and Imamura does not explicitly disclose that the request is transmitted on a periodic basis.
Kim discloses transmitting a request on a periodic basis according to a predefined schedule, or based on detection of a predefined event in the video (individual reaction data may be periodically collected for a time interval of a predetermined length and put together as individual user reaction data representative of the time interval – see, for example, paragraphs 0014-0015).
See also the teaching of transmitting a request on a periodic basis according to a predefined schedule or event in Lu (US 2020/0413134: paragraphs 0019, 0022); Pernot (US 2019/0026679: paragraph 0029); and Hendricks (US 2009/0031335: paragraphs 0100, 0143, 0303, 0310).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify MV in view of Sarkar, Imamura and Wang/Roberts/Archibong with the teaching of transmitting a request on a periodic basis as taught by Kim in order to yield the predictable result of allowing user reactions to be collected at a predetermined time, thereby reducing processing time for sending content (see Kim: paragraph 0014; see also Lu (US 2020/0413134): paragraph 0019).
Regarding claim 6, the additional limitations that correspond to the additional limitations of claim 18 are analyzed as discussed in the rejection of claim 18.
Regarding claim 20, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the audience sentiment detection system of claim 15, wherein to facilitate augmentation of the broadcast or stream of video, the one or more processors are configured to:
add to the video a modification based on the at least one aggregate sentiment; or
transmit an instruction to the broadcast platform to add the modification to the video; and wherein the modification comprises a crowd reaction that is indicative of the at least one aggregate sentiment (see the discussion in the rejection of claims 1 and 15 and, including but not limited to, Sarkar: figures 2A-2B, 3E, 4E; Imamura: figures 7-9).
However, MV in view of Sarkar and Imamura does not explicitly disclose that the crowd reaction/sentiment comprises noise.
Kim discloses a modification comprising crowd noise (the modification comprises adding sounds such as hooting, cheering, and applauding sounds) that is indicative of at least one aggregate sentiment (see, for example, paragraph 0017).
See also Dury (US 2017/0006322: paragraphs 0074, 0080, 0117).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify MV in view of Sarkar and Imamura with the teaching of including crowd noise as taught by Kim in order to yield the predictable result of allowing the user to hear the sentiment/sound (see Kim: paragraph 0017).
Regarding claim 13, the additional limitations that correspond to the additional limitations of claim 20 are analyzed as discussed in the rejection of claim 20. Particularly, MV in view of Sarkar, Imamura, Wang/Roberts/Archibong and Kim discloses the method of claim 1, wherein the video depicts a performance and wherein:
the augmenting comprises adding crowd noise to the video that is indicative of the at least one aggregate sentiment; or
the transmitting an indication comprises transmitting an instruction to the broadcast platform to add the crowd noise to the video (see the similar discussion in the rejection of claim 20).
Claims 9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over MV et al. (US 2020/0387934 A1) in view of Sarkar, Imamura and Wang/Roberts/Archibong as applied to claim 1 or claim 7 above, and further in view of Lee (US 2020/0359108 A1).
Regarding claim 9, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the method of claim 7, wherein said detecting that the predefined event has occurred in the video comprises utilizing a network to analyze the video and detect the predetermined event in the video (see the discussion in the rejection of claim 7 and Sarkar: figures 2A-2B, 3E-3F, 4E, paragraphs 0077-0078, 0111-0115; Imamura: paragraphs 0054, 0067, figures 4-5, 8-9).
MV in view of Sarkar and Imamura does not explicitly disclose that the network comprises a neural network.
Lee discloses detecting using a neural network (see paragraph 0092: the reaction server processes the environment data and local reaction data through a neural network which predicts the remote feedback; the predicted remote feedback, along with the local feedback, is then provided in the performance area; thus, the neural network predicts remote reactions based on environment state data and local reaction data).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the MV network to comprise a neural network as taught by Lee in order to yield the predictable result of improving convenience for the user or reducing user time for analyzing the event.
Regarding claim 11, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the method of claim 1, wherein said determining at least one aggregate sentiment of the plurality of audience members comprises:
obtaining images of the audience members physically reacting to the video (obtaining a video clip of a user); and utilizing a network to determine sentiments of the audience members based on the images (see, including but not limited to, Sarkar: figures 2A-2B, 3E, 4B-5).
MV in view of Sarkar and Imamura does not explicitly disclose that the network comprises a neural network.
Lee discloses detecting using a neural network (see paragraph 0092: the reaction server processes the environment data and local reaction data through a neural network which predicts the remote feedback; the predicted remote feedback, along with the local feedback, is then provided in the performance area; thus, the neural network predicts remote reactions based on environment state data and local reaction data).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the MV network to comprise a neural network as taught by Lee in order to yield the predictable result of improving convenience for the user or reducing user time for analyzing the event.
Claims 21 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over MV et al. (US 2020/0387934 A1) in view of Sarkar, Imamura and Wang/Roberts/Archibong as applied to claim 15 above, and further in view of either Bovenschulte et al. (US 2007/0136753) or Doe (US 2020/0177954).
Note: all documents that are directly or indirectly incorporated by reference in their entireties in Bovenschulte (see paragraph 0035) are treated as part of the specification of Bovenschulte (see MPEP 2163.07(b)).
Regarding claim 21, MV in view of Sarkar, Imamura and Wang/Roberts/Archibong discloses the method of claim 1, wherein: said determining at least one aggregate sentiment of the plurality of audience members based on the sentiment data comprises:
determining a first aggregate sentiment for a first subset of the audience members corresponding to a first location (a location associated with one or more users); and
determining a second aggregate sentiment for a second subset of the audience members corresponding to a second location (a location associated with other users);
wherein the first aggregate sentiment is different from the second aggregate sentiment, the first location is different from the second location, and the first subset of the audience members is different from the second subset of audience members; and
wherein said facilitating augmentation of the broadcast or stream of video to indicate the at least one aggregate sentiment comprises:
facilitating augmentation of the broadcast or stream of video to the first subset of users to indicate the first aggregate sentiment; and
facilitating augmentation of the broadcast or stream of video to the second subset of users to indicate the second aggregate sentiment (see, including but not limited to, Sarkar: figures 2A-2B, 3E, 4A-5, 10; Imamura: figures 1, 8-12). However, MV in view of Sarkar and Imamura does not explicitly disclose that the first and second locations are a first geographic location and a second geographic location.
Bovenschulte or Doe (Bovenschulte/Doe) discloses determining an aggregate sentiment/rating of audiences based on subsets of users/audiences corresponding to a first geographic location and a second geographic location, wherein the first geographic location is different from the second geographic location, and facilitating augmentation of the broadcast or stream of video to the first subset of users to indicate the first aggregate sentiment and to the second subset of users to indicate the second aggregate sentiment (see, including but not limited to, Bovenschulte: figures 7, 10-14, paragraphs 0016, 0056, 0096, 0102, 0122; Doe: figures 1, 3, 6, abstract, claim 1).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify MV in view of Sarkar and Imamura with the teaching of first and second geographic locations as taught by Bovenschulte/Doe in order to yield the predictable result of targeting or sharing user ratings with users at a particular geographic location (see, for example, Bovenschulte: paragraph 0017).
Regarding claim 23, the additional limitations that correspond to the additional limitations of claim 21 are analyzed as discussed in the rejection of claim 21.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Horowitz et al. (US 2009/0164904) discloses writer commentary on segments of a live streaming event (paragraphs 0017, 0019).
Wood et al. (US 2019/0069047) discloses methods and systems for sharing live stream media content and inserting written comments, emojis and/or other commentary in the selected timestamped live stream media content (see paragraphs 0117, 0159).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AN SON PHI HUYNH whose telephone number is (571)272-7295. The examiner can normally be reached 9:00 am-6:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, NASSER M. GOODARZI can be reached at 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AN SON P HUYNH/Primary Examiner, Art Unit 2426
February 27, 2026