Prosecution Insights
Last updated: April 19, 2026
Application No. 18/687,279

VIDEO LIVE STREAM METHOD, SYSTEM AND COMPUTER STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Feb 27, 2024
Examiner: EKPO, NNENNA NGOZI
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Hangzhou Alicloud Feitian Information Technology Co. Ltd.
OA Round: 2 (Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 71% (420 granted / 589 resolved), +13.3% vs TC avg (above average)
Interview Lift: +20.9% across resolved cases with interview (strong)
Typical Timeline: 2y 11m average prosecution; 24 currently pending
Career History: 613 total applications across all art units
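As a sanity check, the dashboard's headline rates can be reproduced from the raw counts it reports. The rounding behavior below is an assumption; the tool does not publish its exact formula, so this is only a consistency check, not the tool's actual computation.

```python
# Reproduce the examiner dashboard's headline numbers from its raw counts.
# Assumption: the tool rounds to whole percentage points and adds the
# interview lift directly to the career allow rate.

granted, resolved = 420, 589           # "420 granted / 589 resolved"
allow_rate = granted / resolved * 100  # career allow rate, in percent
interview_lift = 20.9                  # reported lift, in percentage points

print(round(allow_rate))                   # 71 -> matches "Career Allow Rate: 71%"
print(round(allow_rate + interview_lift))  # 92 -> matches "With Interview: 92%"
```

Both rounded values line up with the 71% and 92% figures shown above.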

Statute-Specific Performance

§101: 9.7% (-30.3% vs TC avg)
§103: 48.2% (+8.2% vs TC avg)
§102: 17.9% (-22.1% vs TC avg)
§112: 14.8% (-25.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 589 resolved cases.
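The per-statute deltas are internally consistent: every listed rate differs from a single 40.0% baseline, which appears to be the Tech Center average estimate. The 40.0% figure is inferred from the arithmetic, not stated anywhere in the data.

```python
# Each statute's "vs TC avg" delta equals its rate minus a common baseline.
# The 40.0% baseline is inferred from the arithmetic, not documented.

rates = {"101": 9.7, "103": 48.2, "102": 17.9, "112": 14.8}
tc_avg = 40.0  # inferred Tech Center average estimate

for statute, rate in rates.items():
    delta = rate - tc_avg
    print(f"§{statute}: {delta:+.1f}% vs TC avg")
```

Running this reproduces all four deltas shown above (-30.3, +8.2, -22.1, -25.2).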

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 1, 11 and 14 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-5 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (U.S. Pub. No. 2023/0162451) in view of You et al. (U.S. Pub. No. 2022/0210520) and further in view of Whetsel (U.S. Pub. No. 2011/0153712).

Regarding claim 1, Wang et al. 
discloses a video live stream method, applied to a cloud live stream service platform, comprising: receiving a trigger instruction from a client terminal for instructing to perform a live stream using a virtual human streamer (see paragraph 0005; receiving a live room entry instruction sent by a user, and determining a target live room and target avatar information of the user based on the live room entry instruction); performing, according to the trigger instruction, resource scheduling to acquire a cloud resource, wherein the cloud resource at least includes a first service resource for performing live stream rendering including that for the virtual human streamer (see paragraphs 0049-0051; adjusting the initial avatar information based on the avatar information adjustment instruction to obtain the target avatar information, where the target avatar information includes a plurality of pieces of body image information) and a second service resource for generating a video stream according to a result of the live stream rendering (see paragraph 0052; after the initial avatar information is adjusted based on the avatar information adjustment instruction, the target avatar information is obtained); performing live stream rendering by using the first service resource based on three-dimensional data of the virtual human streamer and scene information of a scene to be live streamed (see paragraph 0072; 3D model information of an avatar and target avatar is presented in the virtual scene); generating a video stream using the second service resource according to a picture generated by the live stream rendering and an audio corresponding to the scene to be live streamed (see paragraph 0039); and pushing the video stream to a live stream room to be live streamed by using the virtual human streamer (see paragraphs 0077-0078).

However, Wang et al. is silent as to resource scheduling to acquire a cloud resource. You et al. 
discloses resource scheduling to acquire a cloud resource (see paragraphs 0033, 0078, 0133, 0189). It would have been obvious to a skilled artisan before the effective filing date of the claimed invention to modify the system of Wang et al. with the teachings of You et al., the motivation being to improve system utilization.

However, Wang et al. and You et al. are silent as to wherein the first service resource and the second service resource use a same virtual container component resource. Whetsel discloses wherein the first service resource and the second service resource use a same virtual container component resource (see paragraph 0029). It would have been obvious to a skilled artisan before the effective filing date of the claimed invention to modify the system of Wang et al. and You et al. with the teachings of Whetsel, the motivation being to exchange information in a controlled, secure environment.

Regarding claim 11, claim 11 is rejected for the same reason set forth in the rejection of claim 1. Regarding claim 14, claim 14 is rejected for the same reason set forth in the rejection of claim 1.

Regarding claim 3, Wang et al., You et al. and Whetsel disclose everything claimed as applied above (see claim 1). Wang et al. discloses wherein after the performing resource scheduling to acquire the cloud resource, the method further comprises: creating a first service process adapted to the first service resource, and initializing a service corresponding to the first service process according to the three-dimensional data of the virtual human streamer (see paragraphs 0049-0051); and creating a second service process adapted to the second service resource, and initializing a service corresponding to the second service process according to an address of the live stream room (see paragraphs 0052, 0025, 0103).

Regarding claim 4, Wang et al., You et al. and Whetsel disclose everything claimed as applied above (see claim 3). Wang et al. 
discloses wherein the initializing a service corresponding to the first service process according to the three-dimensional data of the virtual human streamer comprises: initializing the service corresponding to the first service process according to the three-dimensional data of the virtual human streamer and obtained scene information of the scene to be live streamed (see paragraphs 0035, 0072); wherein the scene information of the scene to be live streamed is obtained in the following manner: obtaining the scene information of the scene to be live streamed from a driving engine storing scene information of a plurality of live stream scenes through a pre-established websocket communication connection (see paragraphs 0006, 0020, 0035).

Regarding claim 5, Wang et al., You et al. and Whetsel disclose everything claimed as applied above (see claim 3). Wang et al. discloses wherein the trigger instruction carries identification information of the live stream room; the address of the live stream room is obtained in advance in the following manners (see paragraph 0010): obtaining the identification information of the live stream room from the trigger instruction (see paragraphs 0010, 0019, 0042-0044); and obtaining the address of the live stream room corresponding to the identification information from a live stream system for managing the live stream room according to the identification information of the live stream room (see paragraphs 0010, 0019, 0042-0044).

Regarding claim 12, Wang et al., You et al. and Whetsel disclose everything claimed as applied above (see claim 11). Wang et al. 
discloses wherein the director system at least comprises a streaming management service module and a driving engine module; the streaming management service module is configured to receive the trigger instruction for instructing to perform the live stream using the virtual human streamer, and apply for the resource to the resource scheduler according to the trigger instruction; and provide the first service resource with the three-dimensional data of the virtual human streamer; and obtain the address of the live stream room to be live streamed by using the virtual human streamer, and provide the address to the second service resource (see paragraphs 0049-0051); and the driving engine module is configured to provide the first service resource with pre-stored scene information of the scene to be live streamed (see paragraphs 0006, 0020, 0035).

Regarding claim 13, Wang et al., You et al. and Whetsel disclose everything claimed as applied above (see claim 11). Whetsel discloses the first service resource transmits the picture generated by the live stream rendering to the second service resource through a data transmission agent (see paragraphs 0027 and 0029).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., You et al. and Whetsel as applied to claim 1 above, and further in view of Liu et al. (U.S. Pub. No. 2024/0137581).

Regarding claim 2, Wang et al., You et al. and Whetsel disclose everything claimed as applied above (see claim 1). Wang et al. discloses wherein the pushing the video stream to a live stream room to be live streamed by using the virtual human streamer comprises: according to a pre-acquired live stream configuration of the client terminal, pushing the video stream to an address corresponding to the live stream room to be live streamed by using the virtual human streamer at time indicated by the configuration (see paragraphs 0025, 0103).

However, Wang et al., You et al. 
and Whetsel are silent as to a pre-acquired live stream start time. Liu et al. discloses a pre-acquired live stream start time (see paragraph 0047). It would have been obvious to a skilled artisan before the effective filing date of the claimed invention to modify the system of Wang et al., You et al. and Whetsel with the teachings of Liu et al., the motivation being to avoid the difference in the target time range caused by the time difference of different clients.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., You et al. and Whetsel as applied to claim 1 above, and further in view of Koh et al. (U.S. Patent No. 11,058,954).

Regarding claim 6, Wang et al., You et al. and Whetsel disclose everything claimed as applied above (see claim 1). However, Wang et al., You et al. and Whetsel are silent as to wherein the performing resource scheduling to acquire a cloud resource comprises: selecting a virtual container component resource to be used from available virtual container component resources in the cloud; and allocating the virtual container component resource to be used as the first service resource for the live stream rendering including that for the virtual human streamer and the second service resource for generating the video stream according to the result of live stream rendering. Koh et al. discloses wherein the performing resource scheduling to acquire a cloud resource comprises: selecting a virtual container component resource to be used from available virtual container component resources in the cloud (see col. 13, lines 10-38 and figs. 5A-5D); and allocating the virtual container component resource to be used as the first service resource for the live stream rendering including that for the virtual human streamer and the second service resource for generating the video stream according to the result of live stream rendering (see col. 13, lines 10-38 and figs. 5A-5D). 
It would have been obvious to a skilled artisan before the effective filing date of the claimed invention to modify the system of Wang et al., You et al. and Whetsel with the teachings of Koh et al., the motivation being to provide efficiency.

Regarding claim 16, Wang et al., You et al. and Whetsel disclose everything claimed as applied above (see claim 2). However, Wang et al., You et al. and Whetsel are silent as to wherein the performing resource scheduling to acquire a cloud resource comprises: selecting a virtual container component resource to be used from available virtual container component resources in the cloud; and allocating the virtual container component resource to be used as the first service resource for the live stream rendering including that for the virtual human streamer and the second service resource for generating the video stream according to the result of live stream rendering. Koh et al. discloses wherein the performing resource scheduling to acquire a cloud resource comprises: selecting a virtual container component resource to be used from available virtual container component resources in the cloud (see col. 13, lines 10-38 and figs. 5A-5D); and allocating the virtual container component resource to be used as the first service resource for the live stream rendering including that for the virtual human streamer and the second service resource for generating the video stream according to the result of live stream rendering (see col. 13, lines 10-38 and figs. 5A-5D). It would have been obvious to a skilled artisan before the effective filing date of the claimed invention to modify the system of Wang et al., You et al. and Whetsel with the teachings of Koh et al., the motivation being to provide efficiency.

Claims 7-9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., You et al. and Whetsel as applied to claim 1 above, and further in view of Zhang (U.S. Pub. No. 2023/0042654). 
Regarding claim 7, Wang et al., You et al. and Whetsel disclose everything claimed as applied above (see claim 1). However, Wang et al., You et al. and Whetsel are silent as to wherein the generating a video stream using the second service resource according to a picture generated by the live stream rendering and an audio corresponding to the scene to be live streamed comprises: respectively obtaining the picture generated by the live stream rendering and the audio corresponding to the scene to be live streamed; and performing audio-video multiplexing operation by using the second service resource based on the picture and the audio, and obtaining the video stream according to an operation result.

Zhang discloses wherein the generating a video stream using the second service resource according to a picture generated by the live stream rendering and an audio corresponding to the scene to be live streamed comprises: respectively obtaining the picture generated by the live stream rendering and the audio corresponding to the scene to be live streamed (see abstract, paragraphs 0005, 0007); and performing audio-video multiplexing operation by using the second service resource based on the picture and the audio, and obtaining the video stream according to an operation result (see paragraphs 0008, 0067). It would have been obvious to a skilled artisan before the effective filing date of the claimed invention to modify the system of Wang et al., You et al. and Whetsel with the teachings of Zhang, the motivation being to ensure audio and video are in synchronization.

Regarding claim 8, Wang et al., You et al., Whetsel and Zhang disclose everything claimed as applied above (see claim 7). 
Zhang discloses wherein the respectively obtaining the picture generated by the live stream rendering and the audio corresponding to the scene to be live streamed comprises: obtaining the picture transmitted by the first service resource through a data transmission agent and generated by the live stream rendering; and acquiring the audio generated after voice conversion of a scene text corresponding to the scene to be live streamed (see paragraphs 0032, 0043 and fig. 9).

Regarding claim 9, Wang et al., You et al., Whetsel and Zhang disclose everything claimed as applied above (see claim 7). Wang et al. discloses wherein the method further comprises: obtaining information of an interactive object in the scene to be live streamed (see paragraph 0073); and carrying the information of the interactive object with supplementary enhancement information (see paragraph 0101). Zhang discloses the performing audio-video multiplexing operation by using the second service resource based on the picture and the audio (see paragraphs 0008, 0067), and obtaining the video stream according to the operation result comprises: performing the audio-video multiplexing operation on the picture, the audio and the supplementary enhancement information by using the second service resource (see paragraphs 0008, 0067); and obtaining the video stream according to the operation result (see paragraphs 0008, 0067).

Regarding claim 18, Wang et al., You et al. and Whetsel disclose everything claimed as applied above (see claim 1). However, Wang et al., You et al. 
and Whetsel are silent as to wherein the generating a video stream using the second service resource according to a picture generated by the live stream rendering and an audio corresponding to the scene to be live streamed comprises: respectively obtaining the picture generated by the live stream rendering and the audio corresponding to the scene to be live streamed; and performing audio-video multiplexing operation by using the second service resource based on the picture and the audio, and obtaining the video stream according to an operation result. Zhang discloses wherein the generating a video stream using the second service resource according to a picture generated by the live stream rendering and an audio corresponding to the scene to be live streamed comprises: respectively obtaining the picture generated by the live stream rendering and the audio corresponding to the scene to be live streamed (see abstract, paragraphs 0005, 0007); and performing audio-video multiplexing operation by using the second service resource based on the picture and the audio, and obtaining the video stream according to an operation result (see paragraphs 0008, 0067). It would have been obvious to a skilled artisan before the effective filing date of the claimed invention to modify the system of Wang et al., You et al. and Whetsel with the teachings of Zhang, the motivation being to ensure audio and video are in synchronization.

Claims 10, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., You et al. and Whetsel as applied to claims 3, 4, 5 above, and further in view of Feng (U.S. Pub. No. 2022/0286718).

Regarding claims 10, 19 and 20, Wang et al., You et al. and Whetsel disclose everything claimed as applied above (see claims 3, 4 and 5). However, Wang et al., You et al. 
and Whetsel are silent as to wherein the pushing the video stream to the live stream room to be live streamed by using the virtual human streamer comprises: pushing the video stream to a live stream CDN corresponding to the address based on the address of the live stream room obtained when initializing the service corresponding to the second service process. Feng discloses wherein the pushing the video stream to the live stream room to be live streamed by using the virtual human streamer comprises: pushing the video stream to a live stream CDN corresponding to the address based on the address of the live stream room obtained when initializing the service corresponding to the second service process (see paragraphs 0061, 0069, 0083, 0085, 0146). It would have been obvious to a skilled artisan before the effective filing date of the claimed invention to modify the system of Wang et al., You et al. and Whetsel with the teachings of Feng, the motivation being to provide faster content delivery.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., You et al., Whetsel and Liu et al. as applied to claim 1 above, and further in view of Koh et al. (U.S. Patent No. 11,058,954).

Regarding claim 15, Wang et al., You et al., Whetsel and Liu et al. disclose everything claimed as applied above (see claim 2). However, Wang et al., You et al., Whetsel and Liu et al. are silent as to wherein the performing resource scheduling to acquire a cloud resource comprises: selecting a virtual container component resource to be used from available virtual container component resources in the cloud; and allocating the virtual container component resource to be used as the first service resource for the live stream rendering including that for the virtual human streamer and the second service resource for generating the video stream according to the result of live stream rendering. Koh et al. 
discloses wherein the performing resource scheduling to acquire a cloud resource comprises: selecting a virtual container component resource to be used from available virtual container component resources in the cloud (see col. 13, lines 10-38 and figs. 5A-5D); and allocating the virtual container component resource to be used as the first service resource for the live stream rendering including that for the virtual human streamer and the second service resource for generating the video stream according to the result of live stream rendering (see col. 13, lines 10-38 and figs. 5A-5D). It would have been obvious to a skilled artisan before the effective filing date of the claimed invention to modify the system of Wang et al., You et al., Whetsel and Liu et al. with the teachings of Koh et al., the motivation being to provide efficiency.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., You et al., Whetsel and Liu et al. as applied to claim 1 above, and further in view of Zhang (U.S. Pub. No. 2023/0042654).

Regarding claim 17, Wang et al., You et al., Whetsel and Liu et al. disclose everything claimed as applied above (see claim 1). However, Wang et al., You et al., Whetsel and Liu et al. are silent as to wherein the generating a video stream using the second service resource according to a picture generated by the live stream rendering and an audio corresponding to the scene to be live streamed comprises: respectively obtaining the picture generated by the live stream rendering and the audio corresponding to the scene to be live streamed; and performing audio-video multiplexing operation by using the second service resource based on the picture and the audio, and obtaining the video stream according to an operation result. 
Zhang discloses wherein the generating a video stream using the second service resource according to a picture generated by the live stream rendering and an audio corresponding to the scene to be live streamed comprises: respectively obtaining the picture generated by the live stream rendering and the audio corresponding to the scene to be live streamed (see abstract, paragraphs 0005, 0007); and performing audio-video multiplexing operation by using the second service resource based on the picture and the audio, and obtaining the video stream according to an operation result (see paragraphs 0008, 0067). It would have been obvious to a skilled artisan before the effective filing date of the claimed invention to modify the system of Wang et al., You et al., Whetsel and Liu et al. with the teachings of Zhang, the motivation being to ensure audio and video are in synchronization.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NNENNA NGOZI EKPO whose telephone number is (571) 270-1663. The examiner can normally be reached M-W 10:00am - 6:30pm, TH-F 8:00am - 4:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Pendleton, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NNENNA N EKPO/
Primary Examiner, Art Unit 2425
September 12, 2025
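For orientation, the pipeline recited in independent claim 1 (trigger instruction, resource scheduling, rendering, stream generation, push) can be sketched as a toy model. Everything below is hypothetical illustration: none of these function names, data shapes, or values appear in the application or the office action.

```python
# Illustrative sketch only: a toy model of the flow recited in claim 1.
# All names (schedule_container, render_frames, etc.) are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Container:
    """One virtual container component resource shared by both services,
    per the 'same virtual container component resource' limitation."""
    frames: list = field(default_factory=list)
    stream: list = field(default_factory=list)

def schedule_container() -> Container:
    # Resource scheduling: acquire a cloud resource for both services.
    return Container()

def render_frames(c: Container, avatar_3d: str, scene: str, n: int = 3) -> None:
    # First service resource: render the virtual human streamer in the scene.
    c.frames = [f"{avatar_3d}@{scene}#frame{i}" for i in range(n)]

def generate_stream(c: Container, audio: str) -> None:
    # Second service resource: multiplex rendered pictures with scene audio.
    c.stream = [(frame, audio) for frame in c.frames]

def push_stream(c: Container, room_address: str) -> dict:
    # Push the multiplexed video stream to the live stream room.
    return {"room": room_address, "packets": len(c.stream)}

def live_stream(trigger: dict) -> dict:
    c = schedule_container()
    render_frames(c, trigger["avatar_3d"], trigger["scene"])
    generate_stream(c, trigger["audio"])
    return push_stream(c, trigger["room_address"])

result = live_stream({"avatar_3d": "host-v1", "scene": "studio",
                      "audio": "tts.wav", "room_address": "rtmp://example/room42"})
print(result)  # {'room': 'rtmp://example/room42', 'packets': 3}
```

The single `Container` shared by both service functions mirrors the limitation the examiner maps to Whetsel; the scheduling step mirrors the limitation mapped to You et al.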

Prosecution Timeline

Feb 27, 2024: Application Filed
May 30, 2025: Non-Final Rejection (§103)
Jul 25, 2025: Response Filed
Sep 12, 2025: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598358: DYNAMIC SETTINGS ON A TELEVISION DEVICE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593080: System and Method for Analyzing Videos in Real-Time (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581138: LIVE ROOM VIDEO PLAYBACK (granted Mar 17, 2026; 2y 5m to grant)
Patent 12574582: PRIVACY-PRESERVING CONTENT DELIVERY (granted Mar 10, 2026; 2y 5m to grant)
Patent 12574597: SYSTEMS AND METHODS FOR AGGREGATING CONTENT IDENTIFIERS IN A SUPER-INTERFACE (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview: 92% (+20.9%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 589 resolved cases by this examiner. Grant probability derived from career allow rate.
