Prosecution Insights
Last updated: April 19, 2026
Application No. 18/665,408

METHOD AND APPARATUS FOR SYNTHESIZED VIDEO STREAM

Non-Final OA: §103, §DP
Filed
May 15, 2024
Examiner
LANGHNOJA, KUNAL N
Art Unit
2425
Tech Center
2400 — Computer Networks
Assignee
Set Industries Corporation
OA Round
1 (Non-Final)
Grant Probability: 43% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
Grant Probability with Interview: 68%

Examiner Intelligence

Career Allow Rate: 43% (171 granted / 394 resolved; -14.6% vs TC avg)
Interview Lift: +24.2% for resolved cases with interview (a strong lift)
Typical Timeline: 3y 2m average prosecution; 20 applications currently pending
Career History: 414 total applications across all art units

Statute-Specific Performance

§101: 6.5% (-33.5% vs TC avg)
§103: 52.1% (+12.1% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 394 resolved cases
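The "vs TC avg" figures are simple signed differences between the examiner's per-statute overcome rate and the Tech Center average. As a sanity check, all four deltas shown are consistent with a single implied TC 2400 average of 40%. A minimal sketch of that arithmetic, where the 40.0% average is inferred from the displayed numbers rather than stated by the source:

```python
# Sanity-check sketch: reproduce the "vs TC avg" deltas shown above.
# The 40.0% Tech Center average is inferred from the displayed figures
# (each delta equals rate minus 40.0); it is not an official USPTO statistic.

TC_AVG = 40.0  # implied TC 2400 average overcome rate, percent

examiner_rates = {"101": 6.5, "103": 52.1, "102": 15.5, "112": 16.6}

def delta_vs_tc(rate: float, tc_avg: float = TC_AVG) -> float:
    """Signed difference (percentage points) vs the TC average."""
    return round(rate - tc_avg, 1)

deltas = {statute: delta_vs_tc(rate) for statute, rate in examiner_rates.items()}
```

The takeaway matches the panel: this examiner overcomes §103 rejections well above the TC average but runs far below it on §101, §102, and §112.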

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-6 and 8-13 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-11 and 14 of U.S. Patent No. 11,172,251. Although the claims at issue are not identical, they are not patentably distinct from each other because both inventions are drawn to generating a synthesized, advertisement-based video stream, and instant application claim 1 is broader in every aspect than the patent claim and is therefore an obvious variant thereof. Claim 1 corresponds to patent claim 1. Claim 2 corresponds to claims 1-2. Claim 3 corresponds to claim 3. Claim 4 corresponds to claim 4. Claim 5 corresponds to claim 5. Claim 6 corresponds to claim 6. Claim 8 corresponds to claim 7. Claim 9 corresponds to claim 8. Claim 10 corresponds to claim 9. Claim 11 corresponds to claim 10. Claim 12 corresponds to claim 11. Claim 13 corresponds to claim 14.

Claims 1-11 and 13 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-10 and 12 of U.S. Patent No. 11,659,236. Although the claims at issue are not identical, they are not patentably distinct from each other because both inventions are drawn to generating a synthesized, advertisement-based video stream, and instant application claim 1 is broader in every aspect than the patent claim and is therefore an obvious variant thereof.
Claim 1 corresponds to patent claim 1. Claim 2 corresponds to claims 1-2. Claim 3 corresponds to claim 2. Claim 4 corresponds to claim 3. Claim 5 corresponds to claim 4. Claim 6 corresponds to claim 5. Claim 7 corresponds to claim 6. Claim 8 corresponds to claim 7. Claim 9 corresponds to claim 8. Claim 10 corresponds to claim 9. Claim 11 corresponds to claim 10. Claim 13 corresponds to claim 12.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Su et al. (US PG Pub. No. 2012/0154513) in view of Du et al. (US PG Pub. No. 2017/0287226).

Regarding claim 1, Su et al. teaches a method for providing a synthesized video stream of a virtual scene (Abstract, Figure 3), comprising: receiving a plurality of digital images or video streams (hereinafter a "plurality of video streams") each essentially consisting of a representative image of a person (i.e. retrieve video information of participants from each of the incoming video streams) (Figures 1 and 3; Para. 0018, 0036); receiving a plurality of video streams each comprising a place (i.e. at least one of the plurality of incoming video streams 160-a may comprise a panoramic video stream) (Para. 0023); receiving a plurality of video streams (Figures 1, 3, 4B); selecting, from the received plurality of video streams each comprising a place, a video stream comprising a place (i.e. at least one of the plurality of incoming video streams 160-a may comprise a panoramic video stream) (Para. 0023); selecting, from the received plurality of video streams (Figures 1, 3, 4B); and combining the plurality of video streams each essentially consisting of the representative image of the person with the selected video stream comprising the place into the synthesized video stream of the virtual scene for transmission to an end-user device via which to display the synthesized video stream of the virtual scene (i.e. generate a seamless virtual circular video stream from the video information comprising a virtual circular image having a composite of participants in a virtual room) (Figures 1, 3, 4B; Abstract, Para. 0022, 0026, 0029, 0036-38).

The reference is unclear with respect to a stream comprising a thing; selecting a video stream comprising a place, the selected video stream; and selecting a video stream comprising a thing, the selected video stream comprising the thing. In a similar field of endeavor, Du et al. teaches a stream comprising a thing (i.e. secondary signals include one or more advertisement elements) (Para. 0081-84); selecting a video stream comprising a place, the selected video stream (Fig. 3A, 4A; Para. 0085, 0137-140 and 0152); and selecting a video stream comprising a thing, the selected video stream comprising the thing (i.e. the extracted real life object and the virtual environment are integrated or combined to render images or videos of a real life object within the virtual environment) (Para. 0081-84).

Therefore, it would have been obvious to one of ordinary skill in the art to modify the reference before the effective filing date of the claimed invention for the purpose of creating an online conference environment integrated with advertisements to reproduce the experience of a face-to-face meeting anywhere.

Regarding claim 2, Su and Du, the combination teaches combining the plurality of video streams each essentially consisting of the representative image of the person (i.e. retrieve video information of participants from each of the incoming video streams) with the selected video stream comprising the place and with the selected video stream comprising the thing into the synthesized video stream (i.e. Du: secondary signals include one or more advertisement elements) (Su: Figures 1, 3, 4B; Du: Figures 3A, 4A; Para. 0085, 0138-140, 0152), comprises: selecting a location within the selected video stream comprising the place in which to display the selected video stream comprising the thing (i.e. the extracted real life object and the virtual environment are integrated or combined to render images or videos of a real life object within the virtual environment) (Du: Para. 0081); and combining the plurality of video streams each essentially consisting of the representative image of the person with the selected video stream comprising the place and with the selected video stream comprising the thing (i.e. Du: secondary signals include one or more advertisement elements) displayed at the selected location within the selected video stream comprising the place (i.e. generate a seamless virtual circular video stream from the video information comprising a virtual circular image having a composite of participants in a virtual room) (Su: Figure 4B; Du: Para. 0081, 0085, 0138-140, 0152).
Regarding claim 3, Su and Du, the combination teaches combining the plurality of video streams each essentially consisting of the representative image of the person with the selected video stream comprising the place and with the selected video stream comprising the thing displayed at the selected location within the selected video stream (i.e. generate a seamless virtual circular video stream from the video information comprising a virtual circular image having a composite of participants in a virtual room and Du: secondary signals include one or more advertisement elements) (Su: Figures 1, 3 and 4B; Du: Para. 0081, 0085, 0138-140, 0152) comprising the place comprises: normalizing the selected video stream (i.e. compositing operations may include resizing the participants to a similar size in the virtual room, as indicated for the participant 320-2 and corresponding video information 330-2) comprising the thing for display at the selected second location within the selected video stream comprising the place; and combining the plurality of video streams each essentially consisting of the representative image of the person with the selected video stream comprising the place and with the selected and normalized video stream comprising the thing displayed at the second selected location within the selected video stream comprising the place (i.e. generate a seamless virtual circular video stream from the video information comprising a virtual circular image having a composite of participants in a virtual room and Du: secondary signals include one or more advertisement elements) (Su: Para. 0029 and Du: Para. 0075-76, 0126, 0128, 0133, 0137). 
Regarding claim 4, Su and Du, the combination teaches combining the plurality of video streams each essentially consisting of the representative image of the person with the selected video stream comprising the place and with the selected video stream comprising the thing into the synthesized video stream of the virtual scene (Su: Figures 1, 3, 4B; Du: Para. 0081, 0085, 0138-140, 0152), comprises: selecting a plurality of locations within the selected video stream comprising the place in which to display the plurality of video streams each essentially consisting of the representative image of the person; and combining the selected video stream comprising the place with the plurality of video streams each essentially consisting of the representative image of the person displayed at the selected plurality of locations within the selected video stream comprising the place and with the selected video stream comprising the thing (i.e. compositing operations may include resizing the participants to a similar size in the virtual room, as indicated for the participant 320-2 and corresponding video information 330-2) (Su: Figures 1, 3 and 4B; Abstract, Para. 0013-15 and Du: Figures 3A, 4A; Para. 0085, 0138-140, 0152). 
Regarding claim 5, Su and Du, the combination teaches combining the selected video stream comprising the place with the plurality of video streams each essentially consisting of the representative image of the person displayed at the selected plurality of locations within the selected video stream comprising the place and with the selected video stream comprising the thing, comprises: normalizing the plurality of video streams each essentially consisting of the representative person for display at the selected plurality of locations within the selected video stream comprising the place; and combining the selected video stream comprising the place with the normalized plurality of video streams each essentially consisting of the representative image of the person displayed at the selected plurality of locations within the selected video stream comprising the place and with the selected video stream comprising the thing (i.e. compositing operations may include resizing the participants to a similar size in the virtual room, as indicated for the participant 320-2 and corresponding video information 330-2) (Su: Figures 1, 3, 4B; Para. 0029 and Du: Para. 0075-76, 0081-84, 0126, 0128, 0133, 0137). Regarding claim 6, Su and Du, the combination teaches receiving the plurality of video streams each essentially consisting of the representative image of the person comprises: receiving a plurality of video streams each comprising one or more persons; and extracting, from each of the plurality of video streams, a portion of the video stream that essentially consists of a representative image of the person (i.e. the extracted real life object and the virtual environment are integrated or combined to render images or videos of a real life object within the virtual environment) (Su: Fig. 1, 3, 4B; Du: Figures 1, 3A, 4A; Para. 0069). 
Regarding claim 7, Su and Du, the combination teaches receiving input for selecting the video stream comprising the place; and wherein selecting the video stream comprising the place comprises selecting the video stream comprising the place, based on the received input (i.e. a user's selection for particular types of themes can be stored in user preference) (Du: Figures 3A, 4A; Para. 0077, 0137).

Regarding claim 8, Su and Du, the combination teaches receiving input for selecting the video stream comprising the thing; and wherein selecting the video stream comprising the thing comprises selecting the video stream comprising the thing, based on the received input (i.e. a user may specifically request a product, a service, a type of product, or a type of service) (Du: Para. 0081-82).

Regarding claim 9, Su and Du, the combination teaches obtaining data associated with a person, a representative image of which is included in the received plurality of video streams each essentially consisting of a representative image of a person; and wherein selecting the video stream comprising the thing comprises selecting the video stream comprising the thing based on the obtained data (Su: Figures 1, 3, 4B; Du: Para. 0081-82).

Regarding claim 10, Su and Du, the combination teaches obtaining data associated with a person to view, or viewing, the synthesized video stream of the virtual scene transmitted to an end-user device; and wherein selecting the video stream comprising the thing comprises selecting the video stream comprising the thing based on the obtained data (Su: Figures 1, 3 and 4B, Para. 0051; Du: Para. 0081-82, 0149).

Regarding claim 11, Su and Du, the combination teaches transmitting the synthesized video stream to the end-user device via which to display the synthesized video stream of the virtual scene; receiving data regarding the synthesized video stream of the virtual scene transmitted to the end-user device (Su: Figures 1, 3, 4B, Para. 0051; Du: Figures 3A, 4A; Para. 0083, 0130); selecting, from the received plurality of video streams, a video stream comprising a new thing, based on the received data (Du: Para. 0081-82, 0149, 0155); and combining the plurality of video streams each essentially consisting of the representative image of the person with the selected video stream comprising the place and with the selected video stream comprising the new thing into a subsequent synthesized video stream of the virtual scene for transmission to an end-user device via which to display the subsequent synthesized, advertisement-based, video stream of the virtual scene (Su: Figures 1, 3 and 4B; Du: Figures 1, 3A, 4A; Para. 0085, 0138-140, 0152).

Regarding claim 12, Su and Du, the combination teaches receiving a plurality of video streams each comprising an advertisement; selecting, from the received plurality of video streams each comprising the advertisement, a video stream comprising an advertisement (Du: Para. 0081-83); and wherein combining the plurality of video streams each essentially consisting of the representative image of the person with the selected video stream comprising the place and with the selected video stream comprising the thing into the synthesized video stream of the virtual scene for transmission to an end-user device via which to display the synthesized, advertisement-based, video stream of the virtual scene (Su: Figure 4B; Para. 0051; Du: Figures 3A, 4A; Para. 0085, 0138-140, 0152) comprises combining the plurality of video streams each essentially consisting of the representative image of the person with the selected video stream comprising the place, with the selected video stream comprising the thing, and with the selected video stream comprising the advertisement into a synthesized video stream of the virtual scene for transmission to an end-user device via which to display the synthesized, advertisement-based, video stream of the virtual scene (Su: Figures 1, 3, 4B; Abstract, Para. 0050-51, Du: Figures 1, 3A, 4A; Para. 0081-85, 0138-140, 0152).

Regarding claim 13, Su and Du, the combination teaches a method (Su: Abstract, Para. 0013-15 and Du: Abstract), involving video conference participants participating in a video conference stream, comprising: receiving a plurality of digital images or video streams (hereinafter a "plurality of video streams") of video conference participants (Su: Figures 1 and 3 and Du: Figures 3A, 4A; Abstract, Para. 0151); extracting a representative portion of a video stream of each of the video conference participants from the plurality of video streams of the video conference participants (Du: Figures 1, 3A, 4A; Para. 0069); receiving a plurality of video streams of virtual places (Su: Figure 1, 3, 4B and Du: Para. 0073-78); receiving a plurality of video streams of things (Du: Para. 0081-84); selecting one or more of the plurality of video streams of things to display in one of the plurality of video streams of virtual places (Du: Para. 0137); selecting one of the plurality of video streams of the virtual places in which to display the representative portion of the video stream of each of the video conference participants, and in which to display the selected one or more of the plurality of video streams of things (Su: Figures 1, 3, 4B; Para. 0013-15 and Du: Para. 0081-84, 0137); combining into a synthesized video conference stream, the representative portion of the video stream of each of the video conference participants with the selected one of the plurality of video streams of the virtual places and with the selected one or more of the plurality of video streams of the things in such a manner as to display the representative portion of each of the video conference participants in a first location of the selected one of the plurality of video streams of the virtual places, and to display the selected one or more of the plurality of video streams of things in a second location of the selected one of the plurality of video streams of virtual places (Su: Figures 1, 3 and 4B; Abstract, Para. 0013-15 and Du: Figures 3A, 4A; Para. 0081, 0085, 0138-140, 0152); and transmitting the synthesized, advertising-based, video conference stream to end-user devices for display to the video conference participants (Su: Figures 1, 3 and 4B; Abstract, Para. 0013-15 and Du: Figures 3A, 4A).

Regarding claim 14, Su and Du, the combination teaches wherein receiving a plurality of video streams each comprising the thing, comprises receiving a plurality of video streams each comprising an advertisement (Du: Para. 0081-84); wherein selecting, from the received plurality of video streams each comprising the thing, the video stream comprising the thing, comprises selecting, from the received plurality of video streams each comprising the advertisement, the video stream comprising the advertisement (Su: Figures 1, 3, 4B; Para. 0013-15 and Du: Para. 0081-84, 0137); and wherein combining the plurality of video streams each essentially consisting of the representative image of the person with the selected video stream comprising the place and with the selected video stream comprising the thing into the synthesized video stream of the virtual scene for transmission to an end-user device via which to display the synthesized video stream of the virtual scene (Su: Figures 1, 3, 4B; Para. 0013-15 and Du: Para. 0081-84, 0137), comprises combining the plurality of video streams each essentially consisting of the representative image of the person with the selected video stream comprising the place and with the selected video stream comprising the advertisement into the synthesized video stream of the virtual scene for transmission to an end-user device via which to display the synthesized video stream of the virtual scene (Su: Figures 1, 3, 4B; Para. 0013-15 and Du: Para. 0081-84, 0137).

Claims 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Su et al. and Du et al., in view of Perez et al. (US PG Pub. No. 2013/0293530).

Regarding claims 15 and 19, Su and Du, the combination teaches transmitting to the end-user device the synthesized video stream of the virtual scene; displaying via the end-user device the synthesized video stream of the virtual scene (Su: Figures 1, 3 and 4B; Abstract, Para. 0013-15 and Du: Figures 3A, 4A). The combination is unclear with respect to tracking movement of an eye of a user viewing; detecting an extent to which the user viewing based on tracking the movement of the eye of the user. In a similar field of endeavor, Perez et al. teaches tracking movement of an eye of a user viewing; detecting an extent to which the user viewing based on tracking the movement of the eye of the user (Figures 10A, 14 and 15, Para. 0058, 0130, 0145).
Therefore, it would have been obvious to one of ordinary skill in the art to modify the combination before the effective filing date of the claimed invention for the common knowledge purpose of providing additional information to viewers on advertisements that are interesting to them to generate more revenue.

Regarding claim 16, Su, Du and Perez, the combination teaches reporting to an advertiser the extent to which the user viewing the displayed synthesized video stream of the virtual scene is viewing the advertisement in the displayed synthesized video stream (Su: Figures 1, 3 and 4B; Du: Para. 0081, 0085, 0138-140, 0152; Perez: Figures 11, 15; Para. 0136).

Regarding claim 17, Su, Du and Perez, the combination teaches displaying via the end-user device, a pop-up window, responsive to the extent to which the user viewing the displayed synthesized video stream of the virtual scene is viewing the advertisement in the displayed synthesized video stream (Su: Figures 1, 3 and 4B; Du: Para. 0081, 0085, 0138-140, 0152; Perez: Para. 0150).

Regarding claim 18, Su, Du and Perez, the combination teaches displaying via the end-user device, the pop-up window, responsive to the extent to which the user viewing the displayed synthesized video stream of the virtual scene is viewing the advertisement in the displayed synthesized video stream, comprises displaying via the end-user device, a pop-up window (Su: Figures 1, 3 and 4B; Du: Para. 0081, 0085, 0138-140, 0152 and Perez: Para. 0150). The reference is unclear with respect to presenting one or more of a query to confirm user interest in the advertisement, a query to obtain information about the user, a promotional offer, a link to a webpage that presents further information, and a hyperlink to select to conduct an online purchase of a product relating to the advertisement. However, the examiner takes official notice that both concepts and advantages are well known and expected in the art.

It would have been obvious to one of ordinary skill in the art to modify the combination by specifically presenting one or more of a query to confirm user interest in the advertisement, a query to obtain information about the user, a promotional offer, a link to a webpage that presents further information, and a hyperlink to select to conduct an online purchase of a product relating to the advertisement before the effective filing date of the claimed invention for the common knowledge purpose of providing additional information to viewers on advertisements that are interesting to them to generate more revenue.

Claim 20 corresponds to claims 16-18.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUNAL LANGHNOJA whose telephone number is (571) 270-3583. The examiner can normally be reached M-F: 9:00 AM - 5:00 PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Pendleton, can be reached at (571) 272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KUNAL LANGHNOJA/
Primary Examiner, Art Unit 2425

Prosecution Timeline

May 15, 2024
Application Filed
May 20, 2025
Response after Non-Final Action
Mar 19, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604063
SYSTEMS AND METHODS FOR CUSTOMIZING A MEDIA PROFILE PAGE
2y 5m to grant • Granted Apr 14, 2026
Patent 12593086
SERVER, INFORMATION PROCESSING SYSTEM, STORAGE MEDIUM, AND TRANSMISSION METHOD
2y 5m to grant • Granted Mar 31, 2026
Patent 12587696
PROCESSING A VIDEO SUBMISSION PACKAGE FOR GOING LIVE ON A MEDIA PLATFORM
2y 5m to grant • Granted Mar 24, 2026
Patent 12568263
DYNAMIC SCHEDULING AND CHANNEL CREATION BASED ON EXTERNAL DATA
2y 5m to grant • Granted Mar 03, 2026
Patent 12556775
DISPLAY APPARATUS
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 43% (68% with interview, +24.2%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 394 resolved cases by this examiner. Grant probability derived from career allow rate.
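The headline projections can be reproduced from the career counts shown earlier. A minimal sketch, assuming the dashboard computes the raw allow rate from 171/394 and applies the interview lift additively before rounding (both are assumptions about its method, not documented behavior):

```python
# Reproduce the headline figures from the examiner's career counts.
# The additive interview-lift model is an assumption about the
# dashboard's method, not a documented formula.

granted, resolved = 171, 394
interview_lift = 24.2  # percentage points, from the interview stats above

base_rate = 100 * granted / resolved          # ~43.4% career allow rate
with_interview = base_rate + interview_lift   # ~67.6% under additive lift
```

Rounding these gives the 43% and 68% figures displayed, which is why the panel notes that grant probability is "derived from career allow rate."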
