Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. This communication is responsive to the application filed 1/19/2024.
2. Claims 1-20 are pending in this application. Claims 1, 8 and 15 are independent claims. Claims 8-20 were withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected species, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 12/1/2025. This action is made Non-Final.
Double Patenting
3. Claims 1, 6 and 7 of this application are patentably indistinct from claims 1-4 of Application No. 18/432,803. Pursuant to 37 CFR 1.78(f), when two or more applications filed by the same applicant or assignee contain patentably indistinct claims, elimination of such claims from all but one application may be required in the absence of good and sufficient reason for their retention during pendency in more than one application. Applicant is required to either cancel the patentably indistinct claims from all but one application or maintain a clear line of demarcation between the applications. See MPEP § 822.
Present Invention (claims 1, 6 and 7 below) compared with Application No. 18/432,803 (claims 1-4 below):
1. A method, comprising: causing a virtual meeting user interface (UI) to be presented during a virtual meeting between a plurality of participants, the virtual meeting UI comprising a plurality of regions each corresponding to a video stream associated with one or more of the plurality of participants; determining, during the virtual meeting, that a background of a first region corresponding to a first video stream associated with a first participant of the plurality of participants is to be modified in the virtual meeting UI; identifying a first frame of the first video stream as a candidate for the background of the first region; generating, using a first generative artificial intelligence (AI) model and using the first frame as input to the first generative AI model, an enhanced background image; and for each of one or more second frames of the first video stream: generating a composite image by superimposing an image of the first participant depicted in a respective second frame of the one or more second frames of the video stream on the enhanced background image, and causing the composite image to be presented in the first region of the virtual meeting UI in place of the respective second frame.
6. The method of claim 1, wherein: identifying the first frame of the video stream occurs at a virtual meeting preparation phase of the virtual meeting; and generating the composite image occurs during a live phase of the virtual meeting.
7. The method of claim 1, wherein superimposing the image of the first participant on the enhanced background image is based on a location and a size of the image of the first participant with respect to the respective enhanced background image.
Application No. 18/432,803:
1. A method, comprising: determining that a background of a visual item corresponding to a video stream of a first client device of a participant of a virtual meeting is to be modified in a virtual meeting user interface (UI); identifying a first frame of the video stream as a candidate for the background of the visual item; and for each of one or more second frames of the video stream: generating a composite image by superimposing an image of a participant depicted in a respective second frame of the one or more second frames of the video stream on a background depicted in the first frame using a location and a size of the image of the participant with respect to the respective second frame; and causing the composite image to be presented in the virtual meeting UI on a second client device in place of the respective second frame.
2. The method of claim 1, wherein: the first frame comprises an image of the participant in an area of the first frame; the method further comprises modifying the first frame by: removing the image of the participant from the area; and using an artificial intelligence (AI) model to fill the area.
3. The method of claim 2, wherein the AI model comprises a diffusion model.
4. The method of claim 1, wherein: identifying the first frame occurs at a preparation phase of the virtual meeting; and generating the composite image occurs during a live phase of the virtual meeting.
Claim Rejections - 35 USC § 103
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over Agrawal et al. ("Agrawal," US 2024/0275911) in view of Yamaguchi (US 2025/0095246).
Regarding claim 1, Agrawal discloses a method, comprising:
causing a virtual meeting user interface (UI) to be presented during a virtual meeting between a plurality of participants, the virtual meeting UI comprising a plurality of regions each corresponding to a video stream associated with one or more of the plurality of participants (Fig. 1, Paragraph 0031 participant windows);
determining, during the virtual meeting, that a background of a first region corresponding to a first video stream associated with a first participant of the plurality of participants is to be modified in the virtual meeting UI (Fig. 5A, Fig. 6A, Fig. 7A, Fig. 9 steps 906, 908, Fig. 10 steps 1004, 1006, Fig. 12 steps 1208, 1210, Paragraphs 0058, 0063, 0066, 0069, 0077, 0083, 0092, 0094, 0118 determining change in background);
identifying a first frame of the first video stream as a candidate for the background of the first region (Fig. 5A, Fig. 6A, Fig. 7A, Fig. 9 steps 902, 904, Fig. 10 step 1002, Fig. 12 steps 1202, 1204, 1206, Paragraphs 0033, 0063, 0069, 0075-0076, 0083, 0092, 0094, 0118 identifying current background); and
for each of one or more second frames of the first video stream:
generating a composite image by superimposing an image of the first participant depicted in a respective second frame of the one or more second frames of the video stream on the enhanced background image (Fig. 5B, Fig. 6B, Fig. 7B, Fig. 9 step 912, Fig. 10 steps 1008-1014, Fig. 12 step 1212, Paragraphs 0038, 0063, 0065-0067 substantially align location/position of first participant, 0078, 0080, 0083, 0093, 0095, 0119 modified live video with original background), and
causing the composite image to be presented in the first region of the virtual meeting UI in place of the respective second frame (Fig. 5B, Fig. 6B, Fig. 7B, Fig. 9 step 914, Fig. 10 step 1020, Fig. 12 step 1216, Paragraphs 0035, 0065-0066, 0079, 0081, 0083, 0093, 0097, 0118-0119 presenting modified live video with original background) (Paragraphs 0027-0128 for complete details).
Agrawal does not specifically teach a first frame and a second frame of video. However, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to interpret the current video frame showing the background just before the change in background as the first frame, and the subsequent video frames showing the new background as the second frames.
Additionally, Agrawal does not expressly disclose generating, using a first generative artificial intelligence (AI) model and using the first frame as input to the first generative AI model, an enhanced background image.
However, Yamaguchi discloses generating, using a first generative artificial intelligence (AI) model and using the first frame as input to the first generative AI model, an enhanced background image (Paragraphs 0004, 0009, 0037-0040, 0046-0059). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Agrawal to use the AI model as taught by Yamaguchi in order to provide a more user-friendly interface that improves quality while also saving user time.
Regarding claim 2, Agrawal discloses wherein: the first frame comprises an image of the first participant located in an area of the first frame (Figs. 5A, 6A, 7A participant in first frame); the method further comprises modifying the first frame by: removing the image of the first participant from the area; and using a second generative AI model to fill the area (Paragraphs 0038, 0058 segmenting and cropping region of interest, i.e., participant, and aligning regions; Fig. 3 item 205, Fig. 11 item 110; Paragraphs 0016, 0042, 0044, 0055, 0058-0059, 0063-0066, 0083, 0101-0102 using AI).
Regarding claim 3, Yamaguchi discloses wherein the second generative AI model comprises a diffusion model (Paragraphs 0004, 0009, 0037-0040, 0046-0059; diffusion model).
Regarding claim 4, Yamaguchi discloses wherein: generating the enhanced background image further comprises using a generative AI prompt as further input to the first generative AI model; and the generative AI prompt comprises a command for the first generative AI model to generate the enhanced background image with one or more image elements (Paragraphs 0004, 0009, 0035-0040, 0046-0059; generate image).
Regarding claim 5, Yamaguchi discloses further comprising obtaining the one or more image elements from the virtual meeting UI (Paragraphs 0004, 0009, 0037-0040, 0046-0059; foreground object).
Regarding claim 6, Agrawal discloses wherein: identifying the first frame of the video stream occurs at a virtual meeting preparation phase of the virtual meeting (Fig. 4, Paragraphs 0060-0062 configuring before using); and generating the composite image occurs during a live phase of the virtual meeting (Fig. 5B, Fig. 6B, Fig. 7B, Fig. 9 step 912, Fig. 10 steps 1008-1014, Fig. 12 step 1212, Paragraphs 0038, 0063, 0065-0066, 0078, 0080, 0083, 0095, 0119 modified live video with original background).
Regarding claim 7, Agrawal discloses wherein superimposing the image of the first participant on the enhanced background image is based on a location and a size of the image of the first participant with respect to the respective enhanced background image (Figs. 5B, 6B, 7B location and size of the participant image with respect to the second frame).
Conclusion
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Fedyk et al. (US 2025/0254269).
7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RASHAWN N TILLERY whose telephone number is (571) 272-6480. The examiner can normally be reached M-F 9:00 AM - 5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William L Bashore can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RASHAWN N TILLERY/ Primary Examiner, Art Unit 2174