Prosecution Insights
Last updated: April 19, 2026
Application No. 18/320,508

METHOD AND APPARATUS FOR VIDEO GENERATION AND DISPLAYING, DEVICE, AND MEDIUM

Final Rejection (§103)

Filed: May 19, 2023
Examiner: TRUONG, KARL DUC
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
OA Round: 4 (Final)

Grant Probability: 52% (Moderate)
Estimated OA Rounds: 5-6
Estimated Time to Grant: 2y 7m
Grant Probability with Interview: 83%

Examiner Intelligence

Career Allow Rate: 52% (15 granted / 29 resolved; -10.3% vs TC avg)
Interview Lift: +31.0% (strong; based on resolved cases with interview)
Avg Prosecution: 2y 7m (typical timeline); 45 applications currently pending
Total Applications: 74 (career history, across all art units)
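The headline figures above are internally consistent. A minimal sketch of how they likely relate; the dashboard's exact formulas are not published, so the variable names and the lift formula are assumptions:

```python
# Hypothetical reconstruction of the dashboard's examiner statistics.
granted, resolved = 15, 29
career_allow_rate = granted / resolved            # 15/29 ≈ 0.517, shown as 52%
print(f"Career allow rate: {career_allow_rate:.0%}")

baseline = 0.52         # grant probability without an interview (dashboard figure)
with_interview = 0.83   # the dashboard's "With Interview" figure
interview_lift = with_interview - baseline        # 0.31, shown as +31.0%
print(f"Interview lift: {interview_lift:+.1%}")
```

Note the "+31.0%" lift is exactly the difference between the with-interview probability (83%) and the baseline grant probability (52%), which suggests the tool reports lift as a simple percentage-point difference rather than a ratio.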

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 85.3% (+45.3% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 29 resolved cases
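As a sanity check on the table above, the Tech Center averages can be back-calculated from each row, assuming each "vs TC avg" delta is simply the examiner's percentage minus the TC average (an assumption; the tool does not document its formula):

```python
# Back-calculating the Tech Center averages implied by the dashboard rows,
# assuming: delta = examiner % - TC average % (all values in percent).
examiner_share = {"101": 3.2, "103": 85.3, "102": 9.5, "112": 2.1}
delta_vs_tc = {"101": -36.8, "103": 45.3, "102": -30.5, "112": -37.9}

implied_tc_avg = {s: round(examiner_share[s] - delta_vs_tc[s], 1)
                  for s in examiner_share}
print(implied_tc_avg)
```

Every statute backs out to the same 40.0% figure, which suggests the deltas are measured against a single common baseline rather than per-statute Tech Center distributions.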

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is in response to the amendment filed on December 29, 2025. Claims 1, 8, 11, and 19 have been amended. Claims 5 and 15 have been cancelled. Claims 1-4, 6-14, and 16-19 remain rejected in the application.

Response to Arguments

Applicant's arguments with respect to Claims 1, 8, 11, and 19, filed on December 29, 2025, regarding the rejection under 35 U.S.C. § 103, namely that the prior art does not teach the limitations "the object data comprises object information of an object, wherein the object is an object who requests to generate the virtual gift video; and the object information comprises an object identifier of the object or an object nickname of the object" and "adding the object information comprising the object identifier of the object or the object nickname of the object, to a preset position of each of the avatar images based on a preset information addition method, to generate the fused avatar images", have been fully considered but are moot in view of the new grounds of rejection: these limitations are now taught by the combination of Fukuda and Lin.

Regarding the arguments directed to Claims 2-4, 6-7, 9-10, 12-14, and 16-18, those claims directly or indirectly depend on independent Claims 1, 8, 11, and 19, respectively, and Applicant does not separately argue them apart from the independent claims. The limitations of the independent claims, in conjunction with the cited combination, were previously established as explained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 6-9, 11-12, 16-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Fukuda et al. (US 20200413144 A1, previously cited), hereinafter referenced as Fukuda, in view of Lin et al. (CN 108900858 A), hereinafter referenced as Lin.

Regarding Claim 1, Fukuda discloses a video generation method (Fukuda, [0052]: teaches a video generator 30B executing a process <read on video generation method>), comprising: receiving a generation request carrying object data from a first target device (Fukuda, [0050]: teaches a distribution manager 30A storing various types of data and requests received from user device 12 <read on first target device>, where requests can be an object that is requested <read on generation request carrying object data> to be displayed on the screen of the user device 12); fusing the object data with a basic avatar model in response to the generation request, to obtain fused avatar images (Fukuda, [0103]: teaches an avatar object 111 <read on basic avatar model> with a gift button 132; [0105]: teaches displaying a gift list 135, which contains a list of gift accessories that can then be attached to the avatar object 111 <read on obtaining fused avatar images> as shown in FIGS. 15A-15C); generating a virtual gift video based on the fused avatar images (Fukuda, [0106]: teaches a viewing view 130 video <read on generating virtual gift video> of the avatar object 111 with an attachment object 302 attached to the head <read on fused avatar images>; [0143]: teaches a video generator generating video that includes the avatar object); and sending to the first target device the virtual gift video based on the fused avatar images (Fukuda, [0150]: teaches "when a viewing user starts viewing <read on sending to first target device> the video <read on virtual gift video based on fused avatar images>, the viewing user recognizes the attachment object of the distributing user and identifies the team to which the distributing user belongs"), and displaying, in a target gift tray page of the first target device, the virtual gift video generated based on the fused avatar images (Fukuda, FIG. 7B teaches a gift list on the user device 12 <read on target gift tray page of first target device>; [0106]: teaches a viewing view 130 video <read on generated virtual gift video> of the avatar object 111 with an attachment object 302 attached to the head <read on fused avatar images>), wherein the object data comprises object information of an object (Fukuda, [0072]: teaches "the gift list 135 displays normal objects 222 included in the gift object information 32E," where "the identification information of the gift object is stored in the possession list information 32D of the distributing user distributing the video"), wherein the object is an object who requests to generate the virtual gift video (Fukuda, [0059]: teaches gift object information 32E, which includes the user IDs of the distributing user and the viewer, where the viewer's user ID serves as "a provision user that has requested to display <read on generate> a gift object"; [0073]: teaches "when an attachment object is requested to be displayed and the distributing user selects the attachment object, the avatar object 111 wearing the attachment object is displayed"); and the object information comprises an object identifier of the object or an object nickname of the object (Fukuda, [0072]: teaches "the identification information <read on object identifier of object> of the gift object is stored in the possession list information 32D of the distributing user distributing the video"); and the fusing the object data with a basic avatar model to obtain a fused avatar image comprises: generating avatar images corresponding to the basic avatar model (Fukuda, [0103]: teaches an avatar object 111 <read on basic avatar model> with a gift button 132; [0105]: teaches displaying a gift list 135, which contains a list of gift accessories that can then be attached to the avatar object 111 <read on obtaining fused avatar images> as shown in FIGS. 15A-15C; [0106]: teaches a viewing view 130 video <read on generating virtual gift video> of the avatar object 111 with an attachment object 302 attached to the head <read on fused avatar images>; [0143]: teaches a video generator generating video that includes the avatar object); and adding the object information comprising the object identifier of the object or the object nickname of the object, to a [[preset]] position of each of the avatar images [[based on a preset information addition method, to generate the fused avatar images]] (Fukuda, [0107]: teaches information of assigned attachment objects being registered to the possession list information, where the attachment object, such as cat ears, is attached <read on adding object information to position of avatar images> to the virtual character, thereby forming a superimposed video <read on generated fused avatar images> as shown in FIG. 15C; Note: paragraph [0091] of the specification states: "the preset position may be any position in the avatar image, which may be set as needed").

However, Fukuda does not expressly disclose adding the object information comprising the object identifier of the object or the object nickname of the object, to a preset position of each of the avatar images based on a preset information addition method, to generate the fused avatar images.

Lin discloses adding the object information comprising the object identifier of the object or the object nickname of the object, to a preset position of each of the avatar images based on a preset information addition method, to generate the fused avatar images (Lin, [0102]: teaches a first display module 405 being used to "display the gift animation corresponding to the target virtual gift at a preset position in the display area of the anchor's face"; [0083]: teaches performing a swipe operation that starts from the streamer's face, where a virtual gift animation is played <read on preset information addition method>, such as a hand applying superimposed makeup to the streamer's face; [0090]: teaches types of virtual gifts, such as heart-shaped animations, being superimposed onto the streamer's eyes).

Lin is analogous art with respect to Fukuda because they are from the same field of endeavor, namely a virtual gifting system for live streamers. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement associated gift animations based on user gesture controls, as taught by Lin, into the teaching of Fukuda. Doing so would provide shortcut gestures for gifting virtual items to live streamers, thereby improving convenience and the overall user experience. Therefore, it would have been obvious to combine Lin with Fukuda.
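The Claim 1 mapping above characterizes a request/fuse/generate/return pipeline. A schematic sketch of that claimed flow; every name and data shape here is an illustrative placeholder, not taken from Fukuda, Lin, or the application's actual disclosure:

```python
# Schematic sketch of the Claim 1 flow as characterized in the rejection.
# All identifiers below are hypothetical stand-ins for the claimed elements.
from dataclasses import dataclass

@dataclass
class ObjectData:
    object_id: str   # "an object identifier of the object"
    nickname: str    # "or an object nickname of the object"

def fuse_with_avatar(data: ObjectData, base_avatar: str) -> list:
    """Generate avatar images from the basic avatar model, then add the
    object information at a preset position of each image."""
    frames = [f"{base_avatar}/frame{i}" for i in range(3)]
    return [f"{frame}+{data.object_id}" for frame in frames]

def handle_generation_request(data: ObjectData) -> str:
    """Server side: fuse the object data, generate the virtual gift
    video from the fused avatar images, and return it."""
    fused_images = fuse_with_avatar(data, "basic_avatar")
    return "video[" + "|".join(fused_images) + "]"

# The first target device sends the generation request and displays the
# returned virtual gift video in its target gift tray page.
print(handle_generation_request(ObjectData("u123", "viewer_nick")))
```

The sketch makes the dispute concrete: the examiner maps Fukuda to everything except the bracketed "preset position / preset information addition method" step inside `fuse_with_avatar`, which is where Lin is brought in.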
Regarding Claim 8, Fukuda discloses a video display method (Fukuda, [0051]: teaches a video generator 30B executing processes <read on video display method> in accordance with requests to display gift objects), comprising: obtaining object data, in response to detecting a generation operation inputted by a user (Fukuda, [0077]: teaches a distribution manager 30A determining whether a display request of a gift object <read on obtaining object data> is received from user device 12 <read on detecting generation operation inputted by user>), wherein the object data comprises object information of an object (Fukuda, [0072]: teaches "the gift list 135 displays normal objects 222 included in the gift object information 32E"), wherein the object is an object who requests to generate the virtual gift video (Fukuda, [0059]: teaches gift object information 32E, which includes the user IDs of the distributing user and the viewer, where the viewer's user ID serves as "a provision user that has requested to display <read on generate> a gift object"; [0073]: teaches "when an attachment object is requested to be displayed and the distributing user selects the attachment object, the avatar object 111 wearing the attachment object is displayed"); and the object information comprises an object identifier of the object or an object nickname of the object (Fukuda, [0072]: teaches "the identification information <read on object identifier of object> of the gift object is stored in the possession list information 32D of the distributing user distributing the video"); sending a generation request carrying the object data to a second target device (Fukuda, [0072]: teaches "a display request <read on generation request> including identification information of the object <read on object data> is sent from the user device 12 to the server 13 <read on second target device> and the object is displayed in the video"), wherein the generation request is configured to provide a virtual gift video that is fed back from the second target device (Fukuda, [0103]: teaches an avatar object 111 <read on basic avatar model> with a gift button 132; [0105]: teaches displaying a gift list 135, which contains a list of gift accessories that can then be attached to the avatar object 111 <read on obtaining fused avatar images> as shown in FIGS. 15A-15C; [0078]: teaches a distribution manager receiving a display request from server 13 <read on second target device>) and generated by generating avatar images corresponding to a basic avatar model (Fukuda, [0106]: teaches a viewing view 130 video <read on generating virtual gift video> of the avatar object 111 with an attachment object 302 attached to the head <read on fused avatar images>; [0143]: teaches a video generator generating video that includes the avatar object); and adding the object information comprising the object identifier of the object or the object nickname of the object to a [[preset]] position of each of the avatar images [[based on a preset information addition method, to generate the fused avatar images]] (Fukuda, [0107]: teaches information of assigned attachment objects being registered to the possession list information, where the attachment object, such as cat ears, is attached <read on adding object information to position of avatar images> to the virtual character, thereby forming a superimposed video <read on generated fused avatar images> as shown in FIG. 15C); receiving from the second target device the generated virtual gift video (Fukuda, [0078]: teaches the distribution manager receiving a display request from server 13 <read on second target device>; [0072]: teaches a display request of the avatar object with an attached gift object <read on fused object data with basic avatar video model> which is then sent to viewers <read on receiving virtual gift video>); and displaying in a target gift tray page the generated virtual gift video (Fukuda, [0106]: teaches a viewing view 130 video <read on generated virtual gift video> of the avatar object 111 with an attachment object 302 attached to the head <read on fused avatar images>).

However, Fukuda does not expressly disclose adding the object information comprising the object identifier of the object or the object nickname of the object to a preset position of each of the avatar images based on a preset information addition method, to generate the fused avatar images.

Lin discloses adding the object information comprising the object identifier of the object or the object nickname of the object to a preset position of each of the avatar images based on a preset information addition method, to generate the fused avatar images (Lin, [0102]: teaches a first display module 405 being used to "display the gift animation corresponding to the target virtual gift at a preset position in the display area of the anchor's face"; [0083]: teaches performing a swipe operation that starts from the streamer's face, where the virtual gift animation is played <read on preset information addition method>, such as a hand applying makeup to the streamer's face; [0090]: teaches types of virtual gifts, such as heart-shaped animations, being superimposed onto the streamer's eyes).

Lin is analogous art with respect to Fukuda because they are from the same field of endeavor, namely a virtual gifting system for live streamers.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement associated gift animations based on user gesture controls, as taught by Lin, into the teaching of Fukuda. Doing so would provide shortcut gestures for gifting virtual items to live streamers, thereby improving convenience and the overall user experience. Therefore, it would have been obvious to combine Lin with Fukuda.

Regarding Claim 11, Fukuda discloses a computing device (Fukuda, FIG. 1 teaches a user device <read on computing device> 12), comprising: a processor (Fukuda, FIG. 1 teaches the user device including a computer processor 20); a memory configured to store executable instructions (Fukuda, FIG. 1 teaches the user device including a memory 21 and storage 22, where storage 22 includes a video application program 22A <read on store executable instructions>); wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions (Fukuda, FIG. 1 teaches the user device including a computer processor 20, a memory 21, and storage 22, where storage 22 includes a video application program 22A <read on store executable instructions> and is executable by computer processor 20 <read on reading executable instructions>) to receive a generation request carrying object data from a first target device (Fukuda, [0050]: teaches a distribution manager 30A storing various types of data and requests received from user device 12 <read on first target device>, where requests can be an object that is requested <read on generation request carrying object data> to be displayed on the screen of the user device 12); fuse the object data with a basic avatar model in response to the generation request, to obtain fused avatar images (Fukuda, [0103]: teaches an avatar object 111 <read on basic avatar model> with a gift button 132; [0105]: teaches displaying a gift list 135, which contains a list of gift accessories that can then be attached to the avatar object 111 <read on obtaining fused avatar images> as shown in FIGS. 15A-15C); generate a virtual gift video based on the fused avatar images (Fukuda, [0106]: teaches a viewing view 130 video <read on generating virtual gift video> of the avatar object 111 with an attachment object 302 attached to the head <read on fused avatar images>; [0143]: teaches a video generator generating video that includes the avatar object); and send to the first target device the virtual gift video generated based on the fused avatar images (Fukuda, [0150]: teaches "when a viewing user starts viewing <read on sending to first target device> the video <read on virtual gift video based on fused avatar images>, the viewing user recognizes the attachment object of the distributing user and identifies the team to which the distributing user belongs"), and display, in a target gift tray page of the first target device, the virtual gift video generated based on the fused avatar images (Fukuda, FIG. 7B teaches a gift list on the user device 12 <read on target gift tray page of first target device>; [0106]: teaches a viewing view 130 video <read on generated virtual gift video> of the avatar object 111 with an attachment object 302 attached to the head <read on fused avatar images>), wherein the object data comprises object information of an object (Fukuda, [0072]: teaches "the gift list 135 displays normal objects 222 included in the gift object information 32E," where "the identification information of the gift object is stored in the possession list information 32D of the distributing user distributing the video"), wherein the object is an object who requests to generate the virtual gift video (Fukuda, [0059]: teaches gift object information 32E, which includes the user IDs of the distributing user and the viewer, where the viewer's user ID serves as "a provision user that has requested to display <read on generate> a gift object"; [0073]: teaches "when an attachment object is requested to be displayed and the distributing user selects the attachment object, the avatar object 111 wearing the attachment object is displayed"); and the object information comprises an object identifier of the object or an object nickname of the object (Fukuda, [0072]: teaches "the identification information <read on object identifier of object> of the gift object is stored in the possession list information 32D of the distributing user distributing the video"); and the object data is fused with a basic avatar model to obtain a fused avatar image by generating avatar images corresponding to the basic avatar model (Fukuda, [0103]: teaches an avatar object 111 <read on basic avatar model> with a gift button 132; [0105]: teaches displaying a gift list 135, which contains a list of gift accessories that can then be attached to the avatar object 111 <read on obtaining fused avatar images> as shown in FIGS. 15A-15C; [0106]: teaches a viewing view 130 video <read on generating virtual gift video> of the avatar object 111 with an attachment object 302 attached to the head <read on fused avatar images>; [0143]: teaches a video generator generating video that includes the avatar object); and adding the object information comprising the object identifier of the object or the object nickname of the object, to a [[preset]] position of each of the avatar images [[based on a preset information addition method, to generate the fused avatar images]] (Fukuda, [0107]: teaches information of assigned attachment objects being registered to the possession list information, where the attachment object, such as cat ears, is attached <read on adding object information to position of avatar images> to the virtual character, thereby forming a superimposed video <read on generated fused avatar images> as shown in FIG. 15C);

or obtain object data, in response to detecting a generation operation inputted by a user (Fukuda, [0077]: teaches a distribution manager 30A determining whether a display request of a gift object <read on obtaining object data> is received from user device 12 <read on detecting generation operation inputted by user>), wherein the object data comprises object information of an object (Fukuda, [0072]: teaches "the gift list 135 displays normal objects 222 included in the gift object information 32E"), wherein the object is an object who requests to generate the virtual gift video (Fukuda, [0059]: teaches gift object information 32E, which includes the user IDs of the distributing user and the viewer, where the viewer's user ID serves as "a provision user that has requested to display <read on generate> a gift object"; [0073]: teaches "when an attachment object is requested to be displayed and the distributing user selects the attachment object, the avatar object 111 wearing the attachment object is displayed"); and the object information comprises an object identifier of the object or an object nickname of the object (Fukuda, [0072]: teaches "the identification information <read on object identifier of object> of the gift object is stored in the possession list information 32D of the distributing user distributing the video"); send a generation request carrying the object data to a second target device (Fukuda, [0072]: teaches "a display request <read on generation request> including identification information of the object <read on object data> is sent from the user device 12 to the server 13 <read on second target device> and the object is displayed in the video"), wherein the generation request is configured to provide a virtual gift video that is fed back from the second target device (Fukuda, [0103]: teaches an avatar object 111 <read on basic avatar model> with a gift button 132; [0105]: teaches displaying a gift list 135, which contains a list of gift accessories that can then be attached to the avatar object 111 <read on obtaining fused avatar images> as shown in FIGS. 15A-15C; [0078]: teaches a distribution manager receiving a display request from server 13 <read on second target device>) and generated by generating avatar images corresponding to the basic avatar model (Fukuda, [0106]: teaches a viewing view 130 video <read on generating virtual gift video> of the avatar object 111 with an attachment object 302 attached to the head <read on fused avatar images>; [0143]: teaches a video generator generating video that includes the avatar object) and adding the object information comprising the object identifier of the object or the object nickname of the object, to a [[preset]] position of each of the avatar images [[based on a preset information addition method to generate fused avatar images]] (Fukuda, [0107]: teaches information of assigned attachment objects being registered to the possession list information, where the attachment object, such as cat ears, is attached <read on adding object information to position of avatar images> to the virtual character, thereby forming a superimposed video <read on generated fused avatar images> as shown in FIG. 15C); receive from the second target device the generated virtual gift video (Fukuda, [0078]: teaches the distribution manager receiving a display request from server 13 <read on second target device>; [0072]: teaches a display request of the avatar object with an attached gift object <read on fused object data with basic avatar video model> which is then sent to viewers <read on receiving virtual gift video>); and display in a target gift tray page the generated virtual gift video (Fukuda, [0106]: teaches a viewing view 130 video <read on generated virtual gift video> of the avatar object 111 with an attachment object 302 attached to the head <read on fused avatar images>).
However, Fukuda does not expressly disclose adding the object information comprising the object identifier of the object or the object nickname of the object, to a preset position of each of the avatar images based on a preset information addition method, to generate the fused avatar images; or adding the object information comprising the object identifier of the object or the object nickname of the object, to a preset position of each of the avatar images based on a preset information addition method to generate fused avatar images.

Lin discloses adding the object information comprising the object identifier of the object or the object nickname of the object, to a preset position of each of the avatar images based on a preset information addition method, to generate the fused avatar images (Lin, [0102]: teaches a first display module 405 being used to "display the gift animation corresponding to the target virtual gift at a preset position in the display area of the anchor's face"; [0083]: teaches performing a swipe operation that starts from the streamer's face, where a virtual gift animation is played <read on preset information addition method>, such as a hand applying superimposed makeup to the streamer's face; [0090]: teaches types of virtual gifts, such as heart-shaped animations, being superimposed onto the streamer's eyes); or adding the object information comprising the object identifier of the object or the object nickname of the object, to a preset position of each of the avatar images based on a preset information addition method to generate fused avatar images (Lin, [0102]: teaches a first display module 405 being used to "display the gift animation corresponding to the target virtual gift at a preset position in the display area of the anchor's face"; [0083]: teaches performing a swipe operation that starts from the streamer's face, where the virtual gift animation is played <read on preset information addition method>, such as a hand applying makeup to the streamer's face; [0090]: teaches types of virtual gifts, such as heart-shaped animations, being superimposed onto the streamer's eyes).

Lin is analogous art with respect to Fukuda because they are from the same field of endeavor, namely a virtual gifting system for live streamers. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement associated gift animations based on user gesture controls, as taught by Lin, into the teaching of Fukuda. Doing so would provide shortcut gestures for gifting virtual items to live streamers, thereby improving convenience and the overall user experience. Therefore, it would have been obvious to combine Lin with Fukuda.

Regarding Claim 19, it recites limitations similar in scope to those of Claim 11, but in the form of a non-transitory computer-readable storage medium. As shown in the rejection above, the combination of Fukuda and Lin discloses the limitations of Claim 11. Additionally, Fukuda discloses a non-transitory computer-readable storage medium storing a computer program (Fukuda, [0011]: teaches a non-transitory computer readable medium that stores a video viewing program <read on computer program>), wherein the computer program, when executed by a processor, causes the processor to (Fukuda, [0011]: teaches a non-transitory computer readable medium that stores a video viewing program, where "when executed by circuitry <read on processor>, causes the circuitry to display a video including an avatar object on a display based on video data received from a server")… Thus, Claim 19 is met by the combination of Fukuda and Lin according to the mapping presented in the rejection of Claim 11, given that the claimed non-transitory computer-readable storage medium corresponds to the computing device of Claim 11.
Regarding Claims 2, 9, and 12, the combination of Fukuda and Lin discloses the video generation method, the video display method, and the computing device of Claims 1, 8, and 11 respectively. Additionally, Fukuda further discloses wherein the object data further comprises at least one of an object posture image or a part image of an object (Fukuda, FIG. 15B teaches the gift list displaying a list of normal objects 222 as preview images <read on part image of object>). Regarding Claims 6 and 16, the combination of Fukuda and Lin discloses the video generation method and the computing device of Claims 1 and 11 respectively. Additionally, Fukuda further discloses wherein after sending the virtual gift video to the first target device, the method further comprises: receiving interactive data for a target live room from the first target device (Fukuda, [0059]: teaches gift object information <read on receiving interactive data> being stored for each user, where the gift object information is a gift that can be used in a video application program, such as the gift being displayed on live video after the user obtains a gacha ticket; FIG. 
6 teaches an example live video scene of a virtual avatar in a room <read on target live room>; Note: it should be noted that paragraphs [0116]-[0117] of the specification states that "interactive data includes presentation time of the virtual gift video and/or user comment information, user operation information, etc.; in addition, although "target live room" is not expressly stated, it is common in the art for live stream hosts (or anchors) to be captured live in a filming room or set), wherein PNG media_image4.png 328 311 media_image4.png Greyscale the interactive data comprises video information corresponding to the virtual gift video (Fukuda, [0059]: teaches the gift object information 32E <read on interactive data> containing a gift, where it is "an element of the video application program <read on video information> usable in an object or the video application program"); obtaining the virtual gift video corresponding to the video information (Fukuda, [0106]: teaches a viewing view 130 video <read on obtaining virtual gift video> of the avatar object 111 with an attachment object 302 attached to the head <read on fused avatar images>; [0143]: teaches a video generator generating video that includes the avatar object); fusing the virtual gift video with a live video of the target live room to form a fused live video (Fukuda, [0047]: teaches "the display controller 20C may combine video data <read on fusing virtual gift video with live video of target live room> created by the display controller 20C with data received from the server 13 and output a video <read on fused live video> to the display device 28 in accordance with the combined data"); and sending the fused live video to an electronic device associated with the target live room (Fukuda, [0047]: teaches "the display controller 20C may combine video data created by the display controller 20C with data received from the server 13 and output a video <read on fused live video> to the display device 28 
<read on electronic device> in accordance with the combined data").

Regarding Claims 7 and 17, the combination of Fukuda and Lin discloses the video generation method and the computing device of Claims 1 and 11 respectively. Additionally, Fukuda further discloses wherein after sending the virtual gift video to the first target device, the method further comprises: receiving interactive data for a target live room from the first target device (Fukuda, [0059]: teaches gift object information <read on receiving interactive data> being stored for each user, where the gift object information is a gift that can be used in a video application program, such as the gift being displayed on live video after the user obtains a gacha ticket; FIG. 6 teaches an example live video scene of a virtual avatar in a room <read on target live room>; Note: paragraphs [0116]-[0117] of the specification state that "interactive data includes presentation time of the virtual gift video and/or user comment information, user operation information, etc."), wherein the interactive data comprises video information corresponding to the virtual gift video (Fukuda, [0059]: teaches the gift object information 32E <read on interactive data> containing a gift, where it is "an element of the video application program <read on video information> usable in an object or the video application program"); and sending gift data to an electronic device associated with the target live room (Fukuda, [0051]: teaches the user sending a display request to video generator <read on electronic device> 30B to display a gift object <read on gift data> on live video), wherein the gift data comprises object information corresponding to the first target device (Fukuda, [0050]: teaches the distribution manager 30A distributing video data generated by video generator 30B, such as a message posted to the video or data of an object <read on object information> requested to be displayed to user device 12) and video information corresponding to the virtual gift video (Fukuda, [0050]: teaches the distribution manager 30A distributing video data generated by video generator 30B, such as a message posted to the video <read on video information> or data of an object requested to be displayed to user device 12).

Claims 3-4, 10, 13-14, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Fukuda et al. (US 20200413144 A1, previously cited), hereinafter referenced as Fukuda, in view of Lin et al. (CN 108900858 A), hereinafter referenced as Lin, as applied to Claims 2, 9, 12, and 11 above respectively, and further in view of Zhang (US 20210099761 A1, previously cited).

Regarding Claims 3 and 13, the combination of Fukuda and Lin discloses the video generation method and the computing device of Claims 2 and 12 respectively. The combination of Fukuda and Lin does not expressly disclose the limitations of Claims 3 and 13; however, Zhang discloses wherein the object data comprises the object posture image (Zhang, [0242]: teaches a virtual item <read on object posture image> being gifted to the virtual avatar of the anchor), and the object posture image comprises at least one of a first posture image for a first part of an object (Zhang, [0243]: teaches a corresponding relationship between the virtual item and the pose data of the first virtual image <read on first posture image for first part of object>) and a second posture image for a second part of the object (Zhang, [0243]: teaches changing the pose of the first virtual image based on the second pose data <read on second posture image for second part of object>); and the fusing the object data with a basic avatar model to obtain fused avatar images comprises: performing posture transfer on the basic avatar model using the object posture image, to generate the fused avatar images (Zhang, [0243]: teaches changing the pose <read on performing posture transfer> of the first virtual image containing the virtual avatar
interacting with the gifted virtual item <read on basic avatar model using object posture image to generate fused avatar images> based on the second pose data).

Zhang is analogous art with respect to Fukuda, in view of Lin because they are from the same field of endeavor, namely giving gift items to virtual avatars on live stream video. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to have virtual object gift data include virtual item information as taught by Zhang into the teaching of Fukuda, in view of Lin. The suggestion for doing so would allow the system to update the virtual avatar poses based on both the anchor and the virtual item, thereby yielding predictable results. Therefore, it would have been obvious to combine Zhang with Fukuda, in view of Lin.

Regarding Claims 4 and 14, the combination of Fukuda and Lin discloses the video generation method and the computing device of Claims 2 and 12 respectively. Additionally, Fukuda further discloses wherein the object data comprises the part image of the object (Fukuda, FIG. 15B teaches the gift list displaying a list of normal objects 222 as preview images <read on part image of object>); and [[the fusing the object data with a basic avatar model to obtain a fused avatar image comprises: updating the basic avatar model using the part image of the object to generate the fused avatar images.]] However, the combination of Fukuda and Lin does not expressly disclose the fusing the object data with a basic avatar model to obtain a fused avatar image comprises: updating the basic avatar model using the part image of the object to generate the fused avatar images.
Zhang discloses the fusing the object data with a basic avatar model to obtain a fused avatar image comprises: updating the basic avatar model using the part image of the object to generate the fused avatar images (Zhang, [0243]: teaches changing the pose <read on updating basic avatar model> of the first virtual image containing the virtual avatar interacting with the gifted virtual item <read on basic avatar model using object posture image to generate fused avatar images> based on the second pose data).

Zhang is analogous art with respect to Fukuda, in view of Lin because they are from the same field of endeavor, namely giving gift items to virtual avatars on live stream video. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to have virtual object gift data include virtual item information as taught by Zhang into the teaching of Fukuda, in view of Lin. The suggestion for doing so would allow the system to update the virtual avatar poses based on both the anchor and the virtual item, thereby yielding predictable results. Therefore, it would have been obvious to combine Zhang with Fukuda, in view of Lin.

Regarding Claims 10 and 18, the combination of Fukuda and Lin discloses the video generation method and the computing device of Claims 9 and 11 respectively. Additionally, Fukuda further discloses wherein the object data comprises [[at least one of an object posture image and]] a part image of an object (Fukuda, FIG. 15B teaches the gift list displaying a list of normal objects 222 as preview images <read on part image of object>), and [[the object posture image comprises at least one of a first posture image for a first part of the object and]] [[a second posture image for a second part of the object;]] the obtaining object data, in response to detecting a generation operation inputted by a user comprises: obtaining a user video which contains the object data and is inputted by the user, in response to detecting the generation operation inputted by a user (Fukuda, [0143]: teaches the server receiving "a display request including identification information of an object <read on detecting generation operation inputted by user> from the server 13 and display the object based on the display request," which corresponds to user device 12, where it generates "a video including the avatar object <read on obtaining user video containing object data>, and display the video in the display device 28 of the user device 12"). However, the combination of Fukuda and Lin does not expressly disclose at least one of an object posture image and the object posture image comprises at least one of a first posture image for a first part of the object and a second posture image for a second part of the object.

Zhang discloses at least one of an object posture image (Zhang, [0242]: teaches a virtual item <read on object posture image> being gifted to the virtual avatar of the anchor) and the object posture image comprises at least one of a first posture image for a first part of the object (Zhang, [0243]: teaches a corresponding relationship between the virtual item and the pose data of the first virtual image <read on first posture image for first part of object>) and a second posture image for a second part of the object (Zhang, [0243]: teaches changing the pose of the first virtual image based on the second pose data <read on second posture image for second part of object>).
Zhang is analogous art with respect to Fukuda, in view of Lin because they are from the same field of endeavor, namely giving gift items to virtual avatars on live stream video. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to have virtual object gift data include virtual item information as taught by Zhang into the teaching of Fukuda, in view of Lin. The suggestion for doing so would allow the system to update the virtual avatar poses based on both the anchor and the virtual item, thereby yielding predictable results. Therefore, it would have been obvious to combine Zhang with Fukuda, in view of Lin.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Alvarez (US 20190080375 A1) discloses a chat and social networking service that allows for exchanging virtual gifts; and Jeong et al. (US 20160035074 A1) discloses a method for applying effects to regions of interest utilizing preset information.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KARL TRUONG whose telephone number is (703) 756-5915. The examiner can normally be reached 7:30 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.D.T./
Examiner, Art Unit 2614

/KENT W CHANG/
Supervisory Patent Examiner, Art Unit 2614

Prosecution Timeline

May 19, 2023
Application Filed
Feb 10, 2025
Non-Final Rejection — §103
May 19, 2025
Response Filed
May 27, 2025
Final Rejection — §103
Sep 05, 2025
Request for Continued Examination
Sep 08, 2025
Response after Non-Final Action
Sep 16, 2025
Non-Final Rejection — §103
Dec 29, 2025
Response Filed
Jan 13, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573149
DATA PROCESSING METHOD AND APPARATUS, DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 10, 2026
Patent 12561875
ANIMATION FRAME DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12494013
AUTODECODING LATENT 3D DIFFUSION MODELS
2y 5m to grant Granted Dec 09, 2025
Patent 12456258
SYSTEMS AND METHODS FOR GENERATING A SHADOW MESH
2y 5m to grant Granted Oct 28, 2025
Patent 12444020
FLEXIBLE IMAGE ASPECT RATIO USING MACHINE LEARNING
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
52%
Grant Probability
83%
With Interview (+31.0%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 29 resolved cases by this examiner. Grant probability derived from career allow rate.
