DETAILED ACTION
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1 and 8-18 recite the limitation “in a case that…”. The Examiner notes that the language recited in this limitation, specifically the phrase “in a case that,” is interpreted as conditional/optional claim language. Such language suggests or makes steps optional but does not require the steps to be performed, does not limit the claim to a particular structure, and does not limit the scope of the claim or claim limitation. Therefore, the language following “in a case that” is optional and is not given patentable weight.
Claim 6 recites the limitation “the interactive special effect” in line 6. It is unclear which “interactive special effect” the limitation refers to, for example, the one recited in line 5 of claim 6 or the one recited in claims 1-5.
Claim 7 recites the limitation “the expression special effect” in line 6. It is unclear which “expression special effect” the limitation refers to, for example, the one recited in line 5 of claim 7 or the one recited in claim 5.
Claim 13 recites the limitation “the anchor object”. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-4, 8-13, 16 and 18-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dong et al. (“Dong”, Pub. No. 2024/0028189) and TAKEDA et al. (“Takeda”, Pub. No. US 2018/0300037).
Per claim 1, Dong teaches a method for live streaming interaction, performed by a terminal of a first anchor object, comprising:
displaying, in a live-streaming page, the first anchor object in a first live-streaming window and a second anchor object in a second live-streaming window (Fig. 2 shows a live room page/live video page having a plurality of live streaming windows 201-204; [0058]…in some application scenarios (for example, a scenario for communicating with at least one anchor by co-hosting), the co-hosting guest may be an anchor in other live room. For another example, in some application scenarios (for example, a scenario in which one anchor communicates with a user by co-hosting), the co-hosting guest may be the user (rather than the anchor). It can be seen that, a co-hosting guest in a first live room may be an anchor in a second live room, or a user who is watching the first live room. The second live room is different from the first live room.);
determining, in a case that the first anchor object in the first live-streaming window makes a preset action (figs. 3-4; [0081]… the live video page is the page 200 shown in FIG. 2, and the “at least one candidate user display interface” includes the interface 201, the interface 202, the interface 203 and the interface 204 shown in FIG. 2. In this case, if the viewer of the live video page triggers a preset selection operation (e.g., a click operation) on the interface 202, the interface 202 may be determined as the target user display interface, such that the viewer may trigger some interaction operations (e.g., sending gifts) on the target user display interface. [0084]… The virtual gifts deployed on the interaction interface may include Gift 1, Gift 2 and the like shown in FIG. 3, such that the anchor may select one or more of the gifts and send the selected gifts to a co-hosting guest corresponding to the interface 309. It should be noted that a page 300 shown in FIG. 3 refers to a live video page displayed on an anchor end of the live streaming. [0085]… a viewer of the live video page is a live viewer (for example, a co-hosting guest or an audience), that is, the interaction method according to the embodiment of the present disclosure is applied to a guest end corresponding to the co-hosting guest or an audience end of the audience. In this case, if the target user display interface is an interface 405 shown in FIG. 4, the interaction interface may be a page 407 shown in FIG. 4.)
Dong does not specifically teach an interactive special effect associated with the preset action, wherein the interactive special effect comprises an interactive element; and displaying the interactive special effect, the interactive special effect being that the interactive element moves from the first live-streaming window to the second live-streaming window, and the interactive special effect being configured to present an interactive effect of the first anchor object in the first live-streaming window and the second anchor object in the second live-streaming window.
However, Takeda teaches an interactive special effect associated with the preset action, wherein the interactive special effect comprises an interactive element (fig. 3 and 4; [0058]… Referring to FIG. 3, it is assumed that a user who manipulates the user terminal 30 has selected a button Btnl representing a heart-shaped sticker included in the sticker selection screen 1012 with the manipulating body H (for example, a user's finger). At this time, highlighting processing may be performed on a region defining the button Btnl in order to display that the button Btnl has been selected); and displaying the interactive special effect, the interactive special effect being that the interactive element moves from the first live-streaming window to the second live-streaming window, and the interactive special effect being configured to present an interactive effect of the first anchor object in the first live-streaming window and the second anchor object in the second live-streaming window ([0059]… FIG. 4 is a diagram for describing a manipulation of locating a sticker on live video using the UI displayed on the display manipulation unit 210 of the user terminal 30 (20). Referring to FIG. 4, after the button Btnl is selected, the user selects any position (location position) on the live video display screen 1010 with the manipulating body H. At this time, the user terminal 30 transmits manipulation information (for example, the type, location position, or the like of a sticker Stk1) with the manipulating body H to the server 10. The server 10 having acquired the manipulation information performs image processing based on the manipulation information on live video. Then, the server 10 distributes the live video after the image processing to a plurality of user terminals including the user terminal 30. Accordingly, the live video M1 (for example, live video on which the sticker Stk1 has been superimposed) after the image processing is displayed on the live video display screen 1010. [0139]…In the example shown in FIG. 14, it is assumed that a user has located a sticker Stk5 on the live video display screen 1010 using the manipulating body H, and has made a sliding manipulation. The manipulation information generation unit 203 generates information concerning the location position of the sticker Stk5 and the sliding manipulation, and the manipulation information transmission unit 204 transmits the manipulation information to the server 10. Accordingly, it is noted that Takeda allows moving of sticker from one interface/window to another interface/window displayed on the user terminal 30 (20) using manipulation body H). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Takeda in the invention of Dong to allow direct manipulation of virtual objects and sharing of the virtual objects because doing so would enhance user’s experience by allowing engagements between users while viewing live streaming content.
Per claim 2, the modified Dong teaches the method for live streaming interaction according to claim 1, wherein said determining, in the case that the first anchor object in the first live-streaming window makes the preset action, the interactive special effect associated with the preset action comprises: acquiring, in the case that the first anchor object in the first live-streaming window makes the preset action, an action content and an action direction by recognizing the preset action and determining the interactive special effect based on the action content and at least one of the action direction or a preset special effect (Takeda, fig. 14; [0059]… FIG. 4 is a diagram for describing a manipulation of locating a sticker on live video using the UI displayed on the display manipulation unit 210 of the user terminal 30 (20). Referring to FIG. 4, after the button Btnl is selected, the user selects any position (location position) on the live video display screen 1010 with the manipulating body H. At this time, the user terminal 30 transmits manipulation information (for example, the type, location position, or the like of a sticker Stk1) with the manipulating body H to the server 10. The server 10 having acquired the manipulation information performs image processing based on the manipulation information on live video. Then, the server 10 distributes the live video after the image processing to a plurality of user terminals including the user terminal 30. Accordingly, the live video M1 (for example, live video on which the sticker Stk1 has been superimposed) after the image processing is displayed on the live video display screen 1010. [0140]…When the manipulation information acquisition unit 102 acquires the above-described manipulation information, the image processing unit 104 performs image processing based on the location position of the sticker Stk5, the sliding manipulation, and the like. For example, the image processing unit 104 may move the sticker Stk5 in a direction corresponding to the sliding manipulation of the manipulating body H at a predetermined speed using the location position of the sticker Stk5 as an initial position. Accordingly, the sticker Stk5 shows such a behavior as to slide over the live video M5. Note that the above-described predetermined speed may be determined in accordance with a sliding speed of the manipulating body H in the above-described sliding manipulation, for example). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Takeda in the invention of Dong to allow direct manipulation of virtual objects and sharing of the virtual objects because doing so would enhance user’s experience by allowing engagements between users while viewing live streaming content.
Per claim 3, the modified Dong teaches the method for live streaming interaction according to claim 2, wherein the action direction is to indicate the second live-streaming window; and said determining the interactive special effect based on the action content and at least one of the action direction or the preset special effect comprises: determining the interactive special effect based on the action direction, wherein the interactive special effect is that the interactive element moves from the first live-streaming window to the second live-streaming window indicated by the action direction (Takeda, fig. 14; [0059]… FIG. 4 is a diagram for describing a manipulation of locating a sticker on live video using the UI displayed on the display manipulation unit 210 of the user terminal 30 (20). Referring to FIG. 4, after the button Btnl is selected, the user selects any position (location position) on the live video display screen 1010 with the manipulating body H. At this time, the user terminal 30 transmits manipulation information (for example, the type, location position, or the like of a sticker Stk1) with the manipulating body H to the server 10. The server 10 having acquired the manipulation information performs image processing based on the manipulation information on live video. Then, the server 10 distributes the live video after the image processing to a plurality of user terminals including the user terminal 30. Accordingly, the live video M1 (for example, live video on which the sticker Stk1 has been superimposed) after the image processing is displayed on the live video display screen 1010. [0140]…When the manipulation information acquisition unit 102 acquires the above-described manipulation information, the image processing unit 104 performs image processing based on the location position of the sticker Stk5, the sliding manipulation, and the like. For example, the image processing unit 104 may move the sticker Stk5 in a direction corresponding to the sliding manipulation of the manipulating body H at a predetermined speed using the location position of the sticker Stk5 as an initial position. Accordingly, the sticker Stk5 shows such a behavior as to slide over the live video M5. Note that the above-described predetermined speed may be determined in accordance with a sliding speed of the manipulating body H in the above-described sliding manipulation, for example). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Takeda in the invention of Dong to allow direct manipulation of virtual objects and sharing of the virtual objects because doing so would enhance user’s experience by allowing engagements between users while viewing live streaming content.
Per claim 4, the modified Dong teaches the method for live streaming interaction according to claim 2, wherein the action content is a body action of the first anchor object, the body action comprising at least one of a facial expression, a head action, a limb action, or a torso action, and the preset special effect is a body special effect; and said determining the interactive special effect based on the action content and at least one of the action direction or the preset special effect comprises: determining the interactive special effect based on the body action of the first anchor object and the body special effect (Takeda, fig. 14; [0059]… FIG. 4 is a diagram for describing a manipulation of locating a sticker on live video using the UI displayed on the display manipulation unit 210 of the user terminal 30 (20). Referring to FIG. 4, after the button Btnl is selected, the user selects any position (location position) on the live video display screen 1010 with the manipulating body H. At this time, the user terminal 30 transmits manipulation information (for example, the type, location position, or the like of a sticker Stk1) with the manipulating body H to the server 10. The server 10 having acquired the manipulation information performs image processing based on the manipulation information on live video. Then, the server 10 distributes the live video after the image processing to a plurality of user terminals including the user terminal 30. Accordingly, the live video M1 (for example, live video on which the sticker Stk1 has been superimposed) after the image processing is displayed on the live video display screen 1010. [0140]…When the manipulation information acquisition unit 102 acquires the above-described manipulation information, the image processing unit 104 performs image processing based on the location position of the sticker Stk5, the sliding manipulation, and the like. For example, the image processing unit 104 may move the sticker Stk5 in a direction corresponding to the sliding manipulation of the manipulating body H at a predetermined speed using the location position of the sticker Stk5 as an initial position. Accordingly, the sticker Stk5 shows such a behavior as to slide over the live video M5. Note that the above-described predetermined speed may be determined in accordance with a sliding speed of the manipulating body H in the above-described sliding manipulation, for example). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Takeda in the invention of Dong to allow direct manipulation of virtual objects and sharing of the virtual objects because doing so would enhance user’s experience by allowing engagements between users while viewing live streaming content.
Per claim 8, the modified Dong teaches the method for live streaming interaction according to claim 2, wherein the action content is a body action of the first anchor object, the body action comprising at least one of a facial expression, a head action, a limb action, or a torso action, and the preset special effect is a body special effect (Takeda, fig. 14; [0059]… FIG. 4 is a diagram for describing a manipulation of locating a sticker on live video using the UI displayed on the display manipulation unit 210 of the user terminal 30 (20). Referring to FIG. 4, after the button Btnl is selected, the user selects any position (location position) on the live video display screen 1010 with the manipulating body H. At this time, the user terminal 30 transmits manipulation information (for example, the type, location position, or the like of a sticker Stk1) with the manipulating body H to the server 10. The server 10 having acquired the manipulation information performs image processing based on the manipulation information on live video. Then, the server 10 distributes the live video after the image processing to a plurality of user terminals including the user terminal 30. Accordingly, the live video M1 (for example, live video on which the sticker Stk1 has been superimposed) after the image processing is displayed on the live video display screen 1010. [0140]…When the manipulation information acquisition unit 102 acquires the above-described manipulation information, the image processing unit 104 performs image processing based on the location position of the sticker Stk5, the sliding manipulation, and the like. For example, the image processing unit 104 may move the sticker Stk5 in a direction corresponding to the sliding manipulation of the manipulating body H at a predetermined speed using the location position of the sticker Stk5 as an initial position. Accordingly, the sticker Stk5 shows such a behavior as to slide over the live video M5. Note that the above-described predetermined speed may be determined in accordance with a sliding speed of the manipulating body H in the above-described sliding manipulation, for example), wherein the interactive element is an interactive prop, the body special effect is a movement process of an interactive prop (fig. 2; interactive prop 1012; fig. 9, [0115]…In the example shown in FIG. 9, it is assumed that a viewing user has located a sticker Stk2 representing a bomb on the live video M2 using the manipulating body H. This sticker Stk2 representing a bomb has characteristics of performing deformation processing on a moving image for a region corresponding to the location position. The manipulation information generation unit 203 generates manipulation information including the location position, type, and mode of the sticker Stk2, and the manipulation information transmission unit 204 transmits the manipulation information to the server 10. Fig. 11, [0125]… the stickers Stk3 have been located in the region 1031, the region 1032, and on the outside of these regions, respectively. The manipulation information generation unit 203 generates manipulation information including each location position, type, and mode of the stickers Stk3, and the manipulation information transmission unit 204 transmits the manipulation information to the server 10. [0127]…in the example shown in FIG. 
11, in a case where the location position is included in the region 1031, the image processing unit 104 may change the sticker Stk3 representing a favor to a sticker Stk31 representing a cat paw, and may superimpose the sticker Stk31 on the live video M3. In addition, in a case where the location position is included in the region 1032, the image processing unit 104 may change the sticker Stk3 to a sticker Stk32 representing text of “CUTE!”, and may superimpose the sticker Stk32 on the live video M3. Accordingly, it is possible to enjoy various displays from a single sticker in accordance with characteristics of an object included in a moving image.); and
said determining the interactive special effect based on the action content and at least one of the action direction or the preset special effect comprises: determining, in a case that the interactive prop touches the first live-streaming window, a moving track of the interactive prop based on an action direction of the body action (fig. 14, moving track Efc; fig. 14; [0059]… FIG. 4 is a diagram for describing a manipulation of locating a sticker on live video using the UI displayed on the display manipulation unit 210 of the user terminal 30 (20). Referring to FIG. 4, after the button Btnl is selected, the user selects any position (location position) on the live video display screen 1010 with the manipulating body H. At this time, the user terminal 30 transmits manipulation information (for example, the type, location position, or the like of a sticker Stk1) with the manipulating body H to the server 10. The server 10 having acquired the manipulation information performs image processing based on the manipulation information on live video. Then, the server 10 distributes the live video after the image processing to a plurality of user terminals including the user terminal 30. Accordingly, the live video M1 (for example, live video on which the sticker Stk1 has been superimposed) after the image processing is displayed on the live video display screen 1010. [0140]…When the manipulation information acquisition unit 102 acquires the above-described manipulation information, the image processing unit 104 performs image processing based on the location position of the sticker Stk5, the sliding manipulation, and the like. For example, the image processing unit 104 may move the sticker Stk5 in a direction corresponding to the sliding manipulation of the manipulating body H at a predetermined speed using the location position of the sticker Stk5 as an initial position. Accordingly, the sticker Stk5 shows such a behavior as to slide over the live video M5. Note that the above-described predetermined speed may be determined in accordance with a sliding speed of the manipulating body H in the above-described sliding manipulation.) and
determining the interactive special effect based on the moving track, the interactive special effect being that the interactive prop starts from the first live-streaming window and moves to a target position of the second live-streaming window along the moving track (fig. 14, moving track Efc; fig. 14; [0059]… FIG. 4 is a diagram for describing a manipulation of locating a sticker on live video using the UI displayed on the display manipulation unit 210 of the user terminal 30 (20). Referring to FIG. 4, after the button Btnl is selected, the user selects any position (location position) on the live video display screen 1010 with the manipulating body H. At this time, the user terminal 30 transmits manipulation information (for example, the type, location position, or the like of a sticker Stk1) with the manipulating body H to the server 10. The server 10 having acquired the manipulation information performs image processing based on the manipulation information on live video. Then, the server 10 distributes the live video after the image processing to a plurality of user terminals including the user terminal 30. Accordingly, the live video M1 (for example, live video on which the sticker Stk1 has been superimposed) after the image processing is displayed on the live video display screen 1010. [0140]…When the manipulation information acquisition unit 102 acquires the above-described manipulation information, the image processing unit 104 performs image processing based on the location position of the sticker Stk5, the sliding manipulation, and the like. For example, the image processing unit 104 may move the sticker Stk5 in a direction corresponding to the sliding manipulation of the manipulating body H at a predetermined speed using the location position of the sticker Stk5 as an initial position. Accordingly, the sticker Stk5 shows such a behavior as to slide over the live video M5. Note that the above-described predetermined speed may be determined in accordance with a sliding speed of the manipulating body H in the above-described sliding manipulation.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Takeda in the invention of Dong to allow direct manipulation of virtual objects and sharing of the virtual objects because doing so would enhance user’s experience by allowing engagements between users while viewing live streaming content.
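For illustration only (not code from the cited references), a rough sketch of how an action direction might indicate a target window and define a straight moving track of the kind discussed above, using hypothetical names:

```python
# Illustrative sketch only; hypothetical helpers, not code from the cited references.
import math

def pick_target_window(origin, direction, window_centers):
    """Pick the window center that best matches the action direction, using the
    largest normalized dot product between the direction and the vector from the
    origin (the first window) to each candidate center."""
    norm = math.hypot(*direction) or 1.0
    ux, uy = direction[0] / norm, direction[1] / norm
    def score(center):
        vx, vy = center[0] - origin[0], center[1] - origin[1]
        vnorm = math.hypot(vx, vy) or 1.0
        return (vx * ux + vy * uy) / vnorm
    return max(window_centers, key=score)

def moving_track(origin, target, steps=10):
    """Straight-line track from the first window to the target position."""
    return [(origin[0] + (target[0] - origin[0]) * i / steps,
             origin[1] + (target[1] - origin[1]) * i / steps)
            for i in range(steps + 1)]

origin = (90, 160)                       # center of the first live-streaming window
candidates = [(270, 160), (90, 480)]     # centers of the other windows
target = pick_target_window(origin, (1, 0), candidates)  # action direction: to the right
print(moving_track(origin, target, steps=4))
```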
Per claim 9, the modified Dong teaches the method for live streaming interaction according to claim 8, wherein the target position of the second live-streaming window is a position where a boundary of the second live-streaming window is located, and the method further comprises at least one of: updating and displaying a score of the first anchor object in a case that the interactive prop touches the boundary of the second live-streaming window; and updating and displaying, in a case that the interactive prop does not touch the boundary of the second live-streaming window, an interaction state of the second anchor object in the second live-streaming window from a first state to a second state, the first state indicating that the second anchor object is in an interaction state, and the second state indicating that the second anchor object is in an interaction quit state (fig. 20, [0165]…The image analysis unit 103 recognizes the first human object Obj80 and the second human object Obj81 included in the live video M8, and the image processing unit 104 determines whether or not the tapped position included in manipulation information is included in a region corresponding to either the region 1080 corresponding to the human object Obj80 or the region 1081 corresponding to the second human object Obj81. Then, the image processing unit 104 specifies a corresponding object (or specifies that neither applies), and counts the number of tapping for the object. Processing as described above is carried out repeatedly for a predetermined time. Note that the predetermined time may be determined by the distribution user who distributes live video using the user terminal 20, or may be a time defined in advance or the like. [0166]…that a gauge 1084 indicating the degree that corresponds to the color of each heat map may be displayed on the live video M8. By displaying the counted result using the heat maps in this manner, users can understand the counted result intuitively.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Takeda in the invention of Dong to allow direct manipulation of virtual objects because doing so would enhance user’s experience by allowing engagements between users while viewing live streaming content.
Per claim 10, the modified Dong teaches the method for live streaming interaction according to claim 8, wherein the target position of the second live-streaming window is a position where a virtual target ring corresponding to the second live-streaming window is located (fig. 20, virtual ring Obj30 and Obj81); and the method further comprises: displaying at least one virtual target ring of the second live-streaming window, each virtual target ring corresponding to a score; and updating and displaying, in a case that the interactive prop touches the virtual target ring, a score of the first anchor object based on a score of the virtual target ring ([0165]…The image analysis unit 103 recognizes the first human object Obj80 and the second human object Obj81 included in the live video M8, and the image processing unit 104 determines whether or not the tapped position included in manipulation information is included in a region corresponding to either the region 1080 corresponding to the human object Obj80 or the region 1081 corresponding to the second human object Obj81. Then, the image processing unit 104 specifies a corresponding object (or specifies that neither applies), and counts the number of tapping for the object. Processing as described above is carried out repeatedly for a predetermined time. Note that the predetermined time may be determined by the distribution user who distributes live video using the user terminal 20, or may be a time defined in advance or the like. [0166]…that a gauge 1084 indicating the degree that corresponds to the color of each heat map may be displayed on the live video M8. By displaying the counted result using the heat maps in this manner, users can understand the counted result intuitively.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Takeda in the invention of Dong to allow direct manipulation of virtual objects because doing so would enhance user’s experience by allowing engagements between users while viewing live streaming content.
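For illustration only (not code from the cited references), a rough sketch of ring-based scoring of the kind discussed above, with hypothetical ring data:

```python
# Illustrative sketch only; hypothetical names and values, not code from the cited references.
import math

def ring_hit_score(prop_pos, rings):
    """Return the score of the first virtual target ring that the interactive prop
    touches, or 0 if it touches none. Each ring is (center_x, center_y, radius, score)."""
    for cx, cy, radius, score in rings:
        if math.hypot(prop_pos[0] - cx, prop_pos[1] - cy) <= radius:
            return score
    return 0

rings = [(270, 160, 20, 10), (270, 160, 50, 5)]   # inner ring worth more points
anchor_score = 0
anchor_score += ring_hit_score((275, 162), rings)  # prop lands near the ring center
print(anchor_score)  # 10
```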
Per claim 11, the modified Dong teaches the method for live streaming interaction according to claim 8, further comprising: detecting a boundary of the first live-streaming window based on a position of the first live-streaming window; and determining, in a case that a boundary of the interactive prop overlaps the boundary of the first live-streaming window, that the interactive prop touches the first live-streaming window (fig. 14, moving track Efc; fig. 14; [0059]… FIG. 4 is a diagram for describing a manipulation of locating a sticker on live video using the UI displayed on the display manipulation unit 210 of the user terminal 30 (20). Referring to FIG. 4, after the button Btnl is selected, the user selects any position (location position) on the live video display screen 1010 with the manipulating body H. At this time, the user terminal 30 transmits manipulation information (for example, the type, location position, or the like of a sticker Stk1) with the manipulating body H to the server 10. The server 10 having acquired the manipulation information performs image processing based on the manipulation information on live video. Then, the server 10 distributes the live video after the image processing to a plurality of user terminals including the user terminal 30. Accordingly, the live video M1 (for example, live video on which the sticker Stk1 has been superimposed) after the image processing is displayed on the live video display screen 1010. [0140]…When the manipulation information acquisition unit 102 acquires the above-described manipulation information, the image processing unit 104 performs image processing based on the location position of the sticker Stk5, the sliding manipulation, and the like. For example, the image processing unit 104 may move the sticker Stk5 in a direction corresponding to the sliding manipulation of the manipulating body H at a predetermined speed using the location position of the sticker Stk5 as an initial position. Accordingly, the sticker Stk5 shows such a behavior as to slide over the live video M5. Note that the above-described predetermined speed may be determined in accordance with a sliding speed of the manipulating body H in the above-described sliding manipulation.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Takeda in the invention of Dong to allow direct manipulation of virtual objects because doing so would enhance user’s experience by allowing engagements between users while viewing live streaming content.
Per claim 12, the modified Dong teaches the method for live streaming interaction according to claim 8, further comprising: determining, in a case that the interactive prop touches the first live-streaming window, a moving speed of the interactive prop based on an action parameter of the body action and an elastic parameter of the interactive prop (fig. 14, moving track Efc; fig. 14; [0059]… FIG. 4 is a diagram for describing a manipulation of locating a sticker on live video using the UI displayed on the display manipulation unit 210 of the user terminal 30 (20). Referring to FIG. 4, after the button Btnl is selected, the user selects any position (location position) on the live video display screen 1010 with the manipulating body H. At this time, the user terminal 30 transmits manipulation information (for example, the type, location position, or the like of a sticker Stk1) with the manipulating body H to the server 10. The server 10 having acquired the manipulation information performs image processing based on the manipulation information on live video. Then, the server 10 distributes the live video after the image processing to a plurality of user terminals including the user terminal 30. Accordingly, the live video M1 (for example, live video on which the sticker Stk1 has been superimposed) after the image processing is displayed on the live video display screen 1010. [0140]…When the manipulation information acquisition unit 102 acquires the above-described manipulation information, the image processing unit 104 performs image processing based on the location position of the sticker Stk5, the sliding manipulation, and the like. For example, the image processing unit 104 may move the sticker Stk5 in a direction corresponding to the sliding manipulation of the manipulating body H at a predetermined speed using the location position of the sticker Stk5 as an initial position. Accordingly, the sticker Stk5 shows such a behavior as to slide over the live video M5. Note that the above-described predetermined speed may be determined in accordance with a sliding speed of the manipulating body H in the above-described sliding manipulation. ) and determining, based on a boundary of the interactive prop and a boundary of the first live-streaming window, a collision position of the interactive prop with the first live-streaming window and said determining the interactive special effect based on the moving track comprises: determining the interactive special effect based on the moving track, the moving speed, and the collision position, the interactive special effect being that the interactive prop starts from the collision position and moves to the target position of the second live-streaming window at the moving speed along the moving track (fig. 15; [0143]…the image processing unit 104 may perform processing of displaying such a presentation that the second sticker Stk52 having been hit is flicked to the outside of the live video M5. In the example shown in FIG. 15, when the second sticker Stk52 is hit by the first sticker Stk51, the second sticker Stk52 is flicked to the outside of the live video display screen 1010. In this manner, when the image processing unit 104 performs predetermined processing when a plurality of stickers hit against each other through sliding processing, viewing users not only view the live video, but also can enjoy the live video like a game.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Takeda in the invention of Dong to allow direct manipulation of virtual objects because doing so would enhance user’s experience by allowing engagements between users while viewing live streaming content.
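For illustration only (not code from the cited references), a rough sketch of deriving a moving speed from an action parameter and an elastic parameter, and a collision position from the window boundary, using hypothetical values:

```python
# Illustrative sketch only; hypothetical parameters, not code from the cited references.
def moving_speed(action_speed, elasticity):
    """A simple model: the prop's speed scales the action parameter
    (e.g., how fast the body action was) by an elastic parameter of the prop."""
    return action_speed * elasticity

def collision_position(prop_center, window):
    """Clamp the prop's center to the window rectangle (x, y, w, h) to get a
    rough point where the prop's boundary meets the window's boundary."""
    x, y, w, h = window
    return (min(max(prop_center[0], x), x + w),
            min(max(prop_center[1], y), y + h))

print(moving_speed(action_speed=2.0, elasticity=1.5))      # 3.0
print(collision_position((90, -4), (0, 0, 180, 320)))      # prop meets the top edge at (90, 0)
```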
Per claim 13, the modified Dong teaches the method for live streaming interaction according to claim 1, wherein the interactive special effect comprises a first interactive element and a second interactive element, the first interactive element being configured to represent interaction progress, and the second interactive element being configured to present an interaction mode between anchor objects ([0143]…in FIG. 15, it is assumed that a user has located a first sticker Stk51 on the live video display screen 1010 using the manipulating body H and has performed a sliding manipulation such that the first sticker Stk51 is slid toward the position at which the second sticker Stk52 is located); and
said displaying the interactive special effect comprises: displaying, for any anchor object, in a case that the first interactive element is in a first form, a movement of the second interactive element from a live-streaming window of the anchor object to a live-streaming window indicated by an action direction of a preset action of the anchor object based on the action direction, the first form being configured to represent that the first interactive element is currently in an interaction state (fig. 15, left hand side figure show first form of interaction. [0143]…when it is determined that the first sticker Stk51 subjected to the sliding processing has made contact with the second sticker Stk52 on the live video M5, the image processing unit 104 may perform predetermined processing. For example, the image processing unit 104 may perform processing of displaying such a presentation that the second sticker Stk52 having been hit is flicked to the outside of the live video M5. In the example shown in FIG. 15, when the second sticker Stk52 is hit by the first sticker Stk51, the second sticker Stk52 is flicked to the outside of the live video display screen 1010. In this manner, when the image processing unit 104 performs predetermined processing when a plurality of stickers hit against each other through sliding processing, viewing users not only view the live video, but also can enjoy the live video like a game. [0144]…when a plurality of stickers hit against each other through sliding processing is not particularly limited. For example, the image processing unit 104 may perform such processing that, when a specific sticker hits another sticker, the specific sticker is changed to a still different sticker. In this manner, by changing processing to be performed when a hit occurs in accordance with the type of a sticker, it is possible to make viewing of live video by viewing users more diverse. Further, the mode related to a change in position of the sticker by sticker sliding over the live video is not particularly limited. For example, in accordance with the mode of sliding of the manipulating body H by sliding processing, the magnitude or direction of an amount of change in position of the sticker may be determined as appropriate.);
displaying, in a case that an interaction duration reaches a preset duration, that the first interactive element is converted from the first form to a second form, the second form being configured to represent that the first interactive element is currently in an interaction quit state; and using, in a case that the first interactive element is in the second form, an anchor object in a live-streaming window where the second interactive element is currently located as a target object ( fig. 15, left hand side figure show first form of interaction. [0143]…when it is determined that the first sticker Stk51 subjected to the sliding processing has made contact with the second sticker Stk52 on the live video M5, the image processing unit 104 may perform predetermined processing. For example, the image processing unit 104 may perform processing of displaying such a presentation that the second sticker Stk52 having been hit is flicked to the outside of the live video M5. In the example shown in FIG. 15, when the second sticker Stk52 is hit by the first sticker Stk51, the second sticker Stk52 is flicked to the outside of the live video display screen 1010. In this manner, when the image processing unit 104 performs predetermined processing when a plurality of stickers hit against each other through sliding processing, viewing users not only view the live video, but also can enjoy the live video like a game. [0144]…when a plurality of stickers hit against each other through sliding processing is not particularly limited. For example, the image processing unit 104 may perform such processing that, when a specific sticker hits another sticker, the specific sticker is changed to a still different sticker. In this manner, by changing processing to be performed when a hit occurs in accordance with the type of a sticker, it is possible to make viewing of live video by viewing users more diverse. Further, the mode related to a change in position of the sticker by sticker sliding over the live video is not particularly limited. For example, in accordance with the mode of sliding of the manipulating body H by sliding processing, the magnitude or direction of an amount of change in position of the sticker may be determined as appropriate.)
Claim 16 is rejected under the same rationale as claims 1, 2 and 4.
Claims 18-19 are rejected under the same rationale as claims 1 and 2 respectively.
Claim 20 is rejected under the same rationale as claims 3 and 4.
Claim(s) 5 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dong et al. (“Dong”, Pub. No. 2024/0028189), TAKEDA et al. (“Takeda”, Pub. No. US 2018/0300037), and el Kaliouby et al. (“Kaliouby”, Pub. No. 2017/0098122).
Per claim 5, the modified Dong teaches the method for live streaming interaction according to claim 4, but does not teach wherein the body action is a facial expression of the first anchor object, and the body special effect is an expression special effect; and said determining the interactive special effect based on the body action of the first anchor object and the body special effect comprises: determining the interactive special effect based on the facial expression of the first anchor object and the expression special effect.
However, Kaliouby teaches the body action is a facial expression of the first anchor object, and the body special effect is an expression special effect and said determining the interactive special effect based on the body action of the first anchor object and the body special effect comprises: determining the interactive special effect based on the facial expression of the first anchor object and the expression special effect ([0040]…The facial landmarks that are detected during the performing of the facial landmark detection can be translated into a representative icon. Fig. 5, [0065]…any number of AUs and/or facial muscle movements can correspond to an emoji. One or more emoji can be selected to represent a given facial expression, for example. [0084]… FIG. 13 shows live streaming of social video in a social media context. The live streaming can be used within a deep learning environment. Analysis of live streaming of social video can be performed using data collected from evaluating images to determine a facial expression and/or mental state. A plurality of images of an individual viewing an electronic display can be received. A face can be identified in an image, based on the use of classifiers. The plurality of images can be evaluated to determine facial expressions and/or mental states of the individual.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Kaliouby in the invention of the modified Dong to include facial expressions of the users to identify one or more virtual objects/emojis represent a given facial expression because doing so would enhance user’s experience by allowing engagements between users while viewing live streaming content.
Per claim 17, the modified Dong teaches the method for live streaming interaction according to claim 16, comprising wherein recognizing the preset action; and acquiring an action content and an action direction comprising: capturing the body action of the first anchor object and in a case that the body action is a limb action or a torso action, recognizing a key point in a limb or a torso of the first anchor object through a human body recognition system to determine an action track of the key point in the limb or the torso of the first anchor object to acquire the limb action or the torso action of the first anchor object and the action direction indicated by the limb action or the torso action (Takeda, fig. 14 shows action track of key point Efc; [0140]…When the manipulation information acquisition unit 102 acquires the above-described manipulation information, the image processing unit 104 performs image processing based on the location position of the sticker Stk5, the sliding manipulation, and the like. For example, the image processing unit 104 may move the sticker Stk5 in a direction corresponding to the sliding manipulation of the manipulating body H at a predetermined speed using the location position of the sticker Stk5 as an initial position. Accordingly, the sticker Stk5 shows such a behavior as to slide over the live video M5. Note that the above-described predetermined speed may be determined in accordance with a sliding speed of the manipulating body H in the above-described sliding manipulation, for example). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Takeda in the invention of Dong to allow direct manipulation of virtual objects and sharing of the virtual objects because doing so would enhance user’s experience by allowing engagements between users while viewing live streaming content.
The modified Dong does not teach in a case that the body action is a facial expression or a head action, recognizing a key point in a face of the first anchor object through a facial action recognition system to determine an action track of the key point in the face of the first anchor object to acquire the head action of the first anchor object and the action direction indicated by the head action.
However, Kaliouby teaches in a case that the body action is a facial expression or a head action, recognizing a key point in a face of the first anchor object through a facial action recognition system to determine an action track of the key point in the face of the first anchor object to acquire the head action of the first anchor object and the action direction indicated by the head action ([0040]…The facial landmarks that are detected during the performing of the facial landmark detection can be translated into a representative icon. Fig. 5, [0065]…any number of AUs and/or facial muscle movements can correspond to an emoji. One or more emoji can be selected to represent a given facial expression, for example. [0084]… FIG. 13 shows live streaming of social video in a social media context. The live streaming can be used within a deep learning environment. Analysis of live streaming of social video can be performed using data collected from evaluating images to determine a facial expression and/or mental state. A plurality of images of an individual viewing an electronic display can be received. A face can be identified in an image, based on the use of classifiers. The plurality of images can be evaluated to determine facial expressions and/or mental states of the individual.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Kaliouby in the invention of the modified Dong to include facial expressions of the users to identify one or more virtual objects/emojis represent a given facial expression because doing so would enhance user’s experience by allowing engagements between users while viewing live streaming content.
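For illustration only (not code from Kaliouby or Takeda), a rough sketch of deriving an action direction from the track of a recognized key point across frames, with hypothetical landmark coordinates:

```python
# Illustrative sketch only; hypothetical key-point data, not code from the cited references.
def action_direction(key_point_track):
    """Derive a coarse action direction from the track of a recognized key point
    (e.g., a facial landmark or a hand joint) across consecutive frames."""
    (x0, y0), (x1, y1) = key_point_track[0], key_point_track[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# Track of a nose-tip landmark over a few frames: the head turns to the right.
print(action_direction([(100, 80), (108, 81), (118, 82)]))  # right
```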
Claim(s) 6-7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dong et al. (“Dong”, Pub. No. 2024/0028189), TAKEDA et al. (“Takeda”, Pub. No. US 2018/0300037), el Kaliouby et al. (“Kaliouby”, Pub. No. 2017/0098122), and Mori (Pub. No. 2007/0036128).
Per claim 6, the modified Dong teaches the method for live streaming interaction according to claim 5, wherein said determining the interactive special effect based on the facial expression of the first anchor object and the expression special effect as described above but does not teach generating, based on the facial expression of the first anchor object and the expression special effect, an interactive special effect comprising the interactive element, wherein the interactive element is a facial part of the first anchor object, and the interactive special effect is that the facial part of the first anchor object moves from a face of the first anchor object to a body of the second anchor object in the second live-streaming window.
However, Mori teaches generating, based on the facial expression of the first anchor object and the expression special effect, an interactive special effect comprising the interactive element, wherein the interactive element is a facial part of the first anchor object, and the interactive special effect is that the facial part of the first anchor object moves from a face of the first anchor object to a body of the second anchor object in the second live-streaming window ([0035] Through this, feelings which arise spontaneously can be more actively communicated to a partner by moving one part of an image of a person, as in, for example, a wink, a smile, a blown kiss, and the like, and causing it to be displayed on the screen. [0170]…FIG. 15 is a diagram showing a communications sequence in the sense-of-connection communications carried out between communications terminals 30a and 30b. Note that here, the face detecting unit 18 and the face recognizing unit 19 are included, and the location of facial parts (eyes, nose, mouth, etc) in an image in the memory unit 16 can be recognized; and actions such as winking, smiling, and blowing a kiss can be executed on the receiving side depending on the location that is tapped. [0174]…When the wife Usagi, who notices the display of the ripple action, taps the left eye of the screen in which the husband Hiromi's photograph is displayed (S37), the packet generating unit 32 of the communications terminal 30b generates a request packet describing the action and tapped location--in other words, the action of the left eye winking--in the communications terminal 30a, and sends the request packet to the communications terminal 30a (S35).) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Mori in the invention of the modified Dong to include active communication to a user by moving one part of an image of a person, as in, for example, a wink, a smile, a blown kiss, and the like, and causing it to be displayed on the screen, because doing so would enhance user’s experience by allowing engagements between users while viewing live streaming content.
Per claim 7, the modified Dong teaches the method for live streaming interaction according to claim 5, wherein said determining the interactive special effect based on the facial expression of the first anchor object and the expression special effect comprises: using an expression special effect corresponding to the facial expression of the first anchor object as an interactive element in the interactive special effect, wherein the interactive special effect is that the expression special effect moves from a face of the first anchor object to a body of the second anchor object in the second live-streaming window ([0035] Through this, feelings which arise spontaneously can be more actively communicated to a partner by moving one part of an image of a person, as in, for example, a wink, a smile, a blown kiss, and the like, and causing it to be displayed on the screen. [0170]…FIG. 15 is a diagram showing a communications sequence in the sense-of-connection communications carried out between communications terminals 30a and 30b. Note that here, the face detecting unit 18 and the face recognizing unit 19 are included, and the location of facial parts (eyes, nose, mouth, etc) in an image in the memory unit 16 can be recognized; and actions such as winking, smiling, and blowing a kiss can be executed on the receiving side depending on the location that is tapped. [0174]…When the wife Usagi, who notices the display of the ripple action, taps the left eye of the screen in which the husband Hiromi's photograph is displayed (S37), the packet generating unit 32 of the communications terminal 30b generates a request packet describing the action and tapped location--in other words, the action of the left eye winking--in the communications terminal 30a, and sends the request packet to the communications terminal 30a (S35).) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Mori in the invention of the modified Dong to include active communication to a user by moving one part of an image of a person, as in, for example, a wink, a smile, a blown kiss, and the like, and causing it to be displayed on the screen, because doing so would enhance user’s experience by allowing engagements between users while viewing live streaming content.
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dong et al. (“Dong”, Pub. No. 2024/0028189), TAKEDA et al. (“Takeda”, Pub. No. US 2018/0300037), and Friedman (Pat. No. 11,138,835).
Per claim 14, the modified Dong teaches the method for live streaming interaction according to claim 1 as described above, but does not teach displaying a preset quantity of virtual bricks and the interactive element; displaying, based on an action direction of the preset action of the first anchor object, a movement of the first live-streaming window according to the action direction; displaying, in a case that the first live-streaming window touches the interactive element, that the interactive element rebounds from the first live-streaming window; and displaying, in a case that the interactive element touches the virtual bricks, a disappearing special effect of the virtual bricks.
However, Friedman teaches displaying a preset quantity of virtual bricks and the interactive element, displaying, based on an action direction of the preset action of the first anchor object, a movement of the first live-streaming window according to the action direction, displaying, in a case that the first live-streaming window touches the interactive element, that the interactive element rebounds from the first live-streaming window, and displaying, in a case that the interactive element touches the virtual bricks, a disappearing special effect of the virtual bricks (fig. 3; col. 10, line 63 – col. 11, line 8… The ball 301 bounces around the game screen. Note that the game screen typically is surrounded by borders on each side so the ball 301 could not leave the game screen (upon hitting each border 310 (top, left, right) the ball 301 would bounce). In another embodiment, there would be a border on the left side, top, and right side of the game screen but the bottom would not have a border and thus if the ball fell to the bottom (without the paddle contacting the ball) then the ball would be lost (and to continue a new ball launch would have to be initiated). A plurality of bricks 300 are all available for the ball 301 hit. When each brick out of the plurality of bricks 300 is hit (contacted) by the ball 301, that brick would be destroyed which removes it from the screen). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Friedman in the invention of the modified Dong to include a brick breaker video game because doing so would provide the user with a form of entertainment and enhance usage.
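For illustration only (not code from Friedman), a toy sketch of the brick-breaker behavior quoted above, in which the ball rebounds from the borders and a brick disappears when hit; all names and values are hypothetical:

```python
# Illustrative sketch only; a toy version of the brick-breaker behavior described above.
def step(ball, velocity, bricks, bounds):
    """Advance the ball one step: bounce off the left/right/top borders and
    remove any brick (x, y, w, h) the ball lands in."""
    width, height = bounds
    x, y = ball[0] + velocity[0], ball[1] + velocity[1]
    vx, vy = velocity
    if x < 0 or x > width:
        vx = -vx            # rebound from a side border
    if y < 0:
        vy = -vy            # rebound from the top border
    survivors = [b for b in bricks
                 if not (b[0] <= x <= b[0] + b[2] and b[1] <= y <= b[1] + b[3])]
    if len(survivors) != len(bricks):
        vy = -vy            # a brick was hit and disappears
    return (x, y), (vx, vy), survivors

ball, vel = (50, 90), (5, -10)
bricks = [(40, 70, 20, 10), (70, 70, 20, 10)]
ball, vel, bricks = step(ball, vel, bricks, bounds=(180, 320))
print(ball, vel, bricks)   # the first brick is removed and the ball rebounds
```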
Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dong et al. (“Dong”, Pub. No. 2024/0028189), TAKEDA et al. (“Takeda”, Pub. No. US 2018/0300037), Friedman (Pat. No. 11,138,835), and Qi et al. (“Qi”, Pub. No. 2023/0254399).
Per claim 15, the modified Dong teaches the method for live streaming interaction according to claim 14, but does not teach displaying, based on an action direction of the preset action of the second anchor object, a movement of the second live-streaming window according to the action direction, wherein the action direction of the preset action of the second anchor object is different from the direction of the preset action of the first anchor object; displaying, in a case that the interactive element touches the second live-streaming window, that the interactive element rebounds from the second live-streaming window; and displaying, in a case that the interactive element touches the virtual bricks, a disappearing special effect of the virtual bricks.
Qi teaches displaying, based on an action direction of the preset action of the second anchor object, a movement of the second live-streaming window according to the action direction, wherein the action direction of the preset action of the second anchor object is different from the direction of the preset action of the first anchor object, displaying, in a case that the interactive element touches the second live-streaming window, that the interactive element rebounds from the second live-streaming window and displaying, in a case that the interactive element touches the virtual bricks, a disappearing special effect of the virtual bricks ([0148]… When the ball control 51 touches a sub-grid block in the grid block container control 500 or a sub-grid block in the grid block container control 501 on respective interfaces, the sub-grid block that is in the grid block container control 500 or the sub-grid block that is in the grid block container control 501 and that is touched by the ball control 51 disappears. [0151]…It should be noted that users of the two devices: the smart TV 20 and the mobile phone 10, respectively operate the slider control 52 and the slider control 53 that belong to the users, and cannot operate a slider control of the other party. For example, when the user of the smart TV 20 operates the slider control 52 and the slider control 52 collides with the ball control 51, the ball control 51 bounces back, and when the ball control 51 collides with the grid block container control 500, the ball game scores for the user of the smart TV 20 and displays a score in the scoring control. When the user of the mobile phone 10 operates the slider control 53 and the slider control 53 collides with the ball control 51, the ball control 51 bounces back, and when the ball control 51 collides with the grid block container control 501, the ball game scores for the user of the mobile phone 10. A user of a device with a higher score within specific time wins.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the teaching of Qi in the invention of the modified Dong to include multiplayer brick breaker game because doing so would enhance the user’s experience by allowing users of multiple devices to play game.
Inquiries
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THANH T VU whose telephone number is (571)272-4073. The examiner can normally be reached M-F: 7AM - 3:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fred Ehichioya can be reached at (571) 272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THANH T VU/ Primary Examiner, Art Unit 2179