DETAILED ACTION
Response to Amendment
The applicant’s response filed 12/15/2026 has been received and entered. No claims were cancelled. No claims were amended. No new claims were added. Therefore, claims 1-20 are pending in this application at this time.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 4, 7, 10-11, 15-16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Krol et al. (US 11,741,652) in view of Oz et al. (US 2022/0286657) (Krol et al. and Oz et al. were cited in the previous Office Action).
Regarding claim 1, Krol et al. (hereinafter “Krol”) teaches a method, comprising:
capturing a video of a first participant with a first camera of a first client device (i.e., capturing a video stream 602, as shown in figure 6A; col.12, lines 15-20);
removing a background of the video to create a backgroundless video (i.e., physical surroundings 606 being excluded or removed by using computer vision techniques; col.12, lines 49-50);
transferring the video or the backgroundless video to a second client device (i.e., the video stream 602 being sent, over a network, to another device belonging to another user; col.13, lines 32-36); and
displaying, by the second client device, the backgroundless video combined with a background image, wherein an orientation of the backgroundless video to the background image (i.e., a second device of a second user can render the video stream as the avatar of the first user and a solid-colored background 606, as shown in figure 6B; col.13, lines 36-58).
It should be noted that Krol fails to clearly teach the features of detecting a pose of a face of a second participant with a second camera of the second client device; and displaying, by the second client device, the backgroundless video combined with a background image, wherein an orientation of the backgroundless video to the background image is based on the pose. However, Oz et al. (hereinafter “Oz”) teaches a system and a method for conducting a three-dimensional (3D) video conference between multiple participants. Oz further teaches a video conference call between different users, as shown in figure 5, wherein a user is provided with a view of one or more other users (para. [0132]). Oz further teaches that each of the users is associated with a user device, as shown in figure 3. Each of the user devices comprises a camera for capturing video streams and detecting or tracking movements of the user, such as detecting the movement of the user’s face or head (i.e., detecting a pose of a face, etc.) (para. [0124] and [0134]-[0135]). Oz further teaches the feature of displaying the video streams of the users in a virtual meeting environment based on the user’s head movement, etc., from a certain point of view as an updated panoramic view or a new view, as shown at the bottom of figure 5 (para. [0136]-[0137] and [0144]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of detecting a pose of a face of a second participant with a second camera of the second client device; and displaying, by the second client device, the backgroundless video combined with a background image, wherein an orientation of the backgroundless video to the background image is based on the pose, as taught by Oz, into the teachings of Krol in order to provide an updated video stream according to the detected movement or changed location of the face of the participant.
Regarding claim 2, Krol further teaches the limitations of the claim in col.4, lines 9-27.
Regarding claim 4, Krol further teaches the limitations of the claim in col. 13, lines 32-36.
Regarding claim 7, Oz further teaches the feature of the claim, such as tracking the user’s head pose and eye gaze, such as detecting that the user looks to the right side (the user rolling his or her head to the right side in order to look at another user in the meeting; para. [0136]-[0137]).
Regarding claim 10, Krol further teaches the features of the claim, such as a server that receives information 422A, video stream 424A, audio streams, etc. from device 306A and transmits the same information 422A, video stream 424A, audio streams, etc. to the other device 306B, as shown in figures 4B and 4C (col.9, lines 47-58).
Regarding claim 11, Krol teaches a non-transitory computer-readable medium storing instructions operable to cause one or more processors to perform operations (i.e., software stored in a memory of server 302 may provide executable information that instructs the devices 306A and 306B on how to render the data to provide the interaction conference; col.7, lines 30-40) comprising:
capturing a video of a first participant with a first camera of a first client device (i.e., capturing a video stream 602, as shown in figure 6A; col.12, lines 15-20);
removing a background of the video to create a backgroundless video (i.e., physical surroundings 606 being excluded or removed by using computer vision techniques; col.12, lines 49-50);
transferring the video or the backgroundless video to a second client device (i.e., the video stream 602 being sent, over a network, to another device belonging to another user; col.13, lines 32-36); and
displaying, by the second client device, the backgroundless video combined with a background image, wherein an orientation of the backgroundless video to the background image (i.e., a second device of a second user can render the video stream as the avatar of the first user and a solid-colored background 606, as shown in figure 6B; col.13, lines 36-58).
It should be noted that Krol fails to clearly teach the features of detecting a pose of a face of a second participant with a second camera of the second client device; and displaying, by the second client device, the backgroundless video combined with a background image, wherein an orientation of the backgroundless video to the background image is based on the pose. However, Oz et al. (hereinafter “Oz”) teaches a system and a method for conducting a three-dimensional (3D) video conference between multiple participants. Oz further teaches a video conference call between different users, as shown in figure 5, wherein a user is provided with a view of one or more other users (para. [0132]). Oz further teaches that each of the users is associated with a user device, as shown in figure 3. Each of the user devices comprises a camera for capturing video streams and detecting or tracking movements of the user, such as detecting the movement of the user’s face or head (i.e., detecting a pose of a face, etc.) (para. [0124] and [0134]-[0135]). Oz further teaches the feature of displaying the video streams of the users in a virtual meeting environment based on the user’s head movement, etc., from a certain point of view as an updated panoramic view or a new view, as shown at the bottom of figure 5 (para. [0136]-[0137] and [0144]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of detecting a pose of a face of a second participant with a second camera of the second client device; and displaying, by the second client device, the backgroundless video combined with a background image, wherein an orientation of the backgroundless video to the background image is based on the pose, as taught by Oz, into the teachings of Krol in order to provide an updated video stream according to the detected movement or changed location of the face of the participant.
Regarding claim 13, Krol further teaches the limitations of the claim in col.4, lines 9-27.
Regarding claim 16, Krol teaches a system (i.e., a system 300, as shown in figure 3) comprising:
one or more memories; and
one or more processors (i.e., server 302 or a web server) configured to execute instructions stored in the one or more memories (i.e., server 302 may provide executable information that instructs the devices 306A and 306B on how to render the data to provide the interaction conference; col.7, lines 30-40) to:
capturing a video of a first participant with a first camera of a first client device (i.e., capturing a video stream 602, as shown in figure 6A; col.12, lines 15-20);
removing a background of the video to create a backgroundless video (i.e., physical surroundings 606 being excluded or removed by using computer vision techniques; col.12, lines 49-50);
transferring the video or the backgroundless video to a second client device (i.e., the video stream 602 being sent, over a network, to another device belonging to another user; col.13, lines 32-36); and
displaying, by the second client device, the backgroundless video combined with a background image, wherein an orientation of the backgroundless video to the background image (i.e., a second device of a second user can render the video stream as the avatar of the first user and a solid-colored background 606, as shown in figure 6B; col.13, lines 36-58).
It should be noted that Krol fails to clearly teach the features of detecting a pose of a face of a second participant with a second camera of the second client device; and displaying, by the second client device, the backgroundless video combined with a background image, wherein an orientation of the backgroundless video to the background image is based on the pose. However, Oz et al. (hereinafter “Oz”) teaches a system and a method for conducting a three-dimensional (3D) video conference between multiple participants. Oz further teaches a video conference call between different users, as shown in figure 5, wherein a user is provided with a view of one or more other users (para. [0132]). Oz further teaches that each of the users is associated with a user device, as shown in figure 3. Each of the user devices comprises a camera for capturing video streams and detecting or tracking movements of the user, such as detecting the movement of the user’s face or head (i.e., detecting a pose of a face, etc.) (para. [0124] and [0134]-[0135]). Oz further teaches the feature of displaying the video streams of the users in a virtual meeting environment based on the user’s head movement, etc., from a certain point of view as an updated panoramic view or a new view, as shown at the bottom of figure 5 (para. [0136]-[0137] and [0144]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of detecting a pose of a face of a second participant with a second camera of the second client device; and displaying, by the second client device, the backgroundless video combined with a background image, wherein an orientation of the backgroundless video to the background image is based on the pose, as taught by Oz, into the teachings of Krol in order to provide an updated video stream according to the detected movement or changed location of the face of the participant.
Regarding claims 18 and 19, Krol further teaches the limitations of the claims in col.4, lines 9-27.
Regarding claim 20, Oz further teaches the feature of the claim, such as tracking the user’s head pose and eye gaze, such as detecting that the user looks to the right side (the user rolling his or her head to the right side in order to look at another user in the meeting; para. [0136]-[0137]).
Claims 3, 5, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Krol et al. (US 11,741,652) in view of Oz et al. (US 2022/0286657) as applied to claims 1 and 11 above, and further in view of Slotznick (US 11,601,618, also cited in the previous Office Action).
Regarding claim 3, Krol and Oz, in combination, teach all subject matter as claimed above, except for the feature of the background image being obtained from a storage server. However, Slotznick teaches a communication system 1300, as shown in figure 13. The system 1300 includes a plurality of communication platforms for providing respective layers. Slotznick further teaches that one of the communication platforms, CP2-CP6, may store and/or generate the background layer for the devices of the participants (col.31, lines 10-20).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of the background image being obtained from a storage server, as taught by Slotznick, into the teachings of Krol and Oz in order to provide the background image to the devices of the participants.
Regarding claim 5, Slotznick further teaches the features of displaying the video streams with multiple layers, as shown in figures 1C, 3B, 4B and 6A, etc. (col.7, lines 4-16; col.28, lines 20-40).
Regarding claim 12, Slotznick further teaches the limitations of the claim, such as transmitting the background image from the first client device to a cloud storage, and further transmitting it to the second client device and to all of the participants (col.14, line 50 to col.15, line 19).
Claims 8-9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Krol et al. (US 11,741,652) in view of Oz et al. (US 2022/0286657) as applied to claims 1 and 11 above, and further in view of Lindmark (US 2025/0078870, also cited in the previous Office Action).
Regarding claims 8, 9 and 15, Krol and Oz, in combination, teach all subject matter as claimed above, except for the features of performing each of a horizontal perspective transformation or a vertical perspective transformation of the background image according to either a yaw or pitch of the pose. However, Lindmark teaches a system and a method of generating a 3D effect of a video stream based on movement, e.g., yaw or pitch, etc., of a viewing participant. Lindmark further teaches a server machine 150, as shown in figure 1, including a 3D effect engine 151. The 3D effect engine 151 can dynamically modify a presentation position of a background layer of a video stream to produce a video stream with a modified background that provides a 3D effect for a viewing participant (para. [0042]). Lindmark further teaches that positions or poses of the viewing participant’s head and/or eyes can be tracked and detected by a local camera associated with the viewing participant. The detected positions or poses of the viewing participant’s head and/or eyes include movements of the viewing participant looking to either the right or left side, looking up, etc., as yaw or pitch poses, so that portions of the background can become visible in the modified background layer (para. [0024]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of performing each of a horizontal perspective transformation or vertical perspective transformation of the background image according to either a yaw or pitch of the pose, as taught by Lindmark, into the teachings of Krol and Oz in order to provide updated video with the video background based on the movements of the viewing participant.
Claims 14 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Krol et al. (US 11,741,652) in view of Oz et al. (US 2022/0286657) as applied to claims 1 and 11 above, and further in view of Kim et al. (EP 3148184 submitted in the IDS dated 06/17/2025, also cited in the previous Office Action).
Regarding claim 14, Krol and Oz, in combination, teach all subject matter as claimed above, except for the features that the background image includes distance information for at least one layer of the background image, and that the orientation is further based on the distance information. However, Kim et al. (hereinafter “Kim”) teaches distance information or depth, as shown in figure 10 (para. [0085]-[0087]), for the purpose of displaying the video having the background image changed according to the change of the location of the participant.
Regarding claim 17, Slotznick further teaches the limitations of the claim, such as the features of displaying the video streams with multiple layers, as shown in figures 1C, 3B, 4B and 6A, etc. (col.7, lines 4-16; col.28, lines 20-40; col.15, lines 33-40; and col.16, lines 8-19). Kim teaches distance information or depth, as shown in figure 10 (para. [0085]-[0087]).
Allowable Subject Matter
Claim 6 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of copending Application No. 18/427,341 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the present application are broader in scope than the claims of the reference application and/or are recited in different words. See In re Karlson, 136 USPQ 184 (CCPA 1963).
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Response to Arguments
Applicant's arguments filed 12/15/2026 have been fully considered but they are not persuasive.
Applicants argued, on page 7, that the combination of Krol and Oz fails to render claims 1, 11, and 16 obvious. Representative claim 1 is reproduced below for reference.
[1A] capturing a video of a first participant with a first camera of a first client device;
[1B] removing a background of the video to create a backgroundless video;
[1C] transferring the video or the backgroundless video to a second client device;
[1D] detecting a pose of a face of a second participant with a second camera of the second client device; and
[1E] displaying, by the second client device, the backgroundless video combined with a background image, wherein an orientation of the backgroundless video to the background image is based on the pose.
A/. In response to the Applicants’ arguments stated in the section:
“1. Krol Fails to Disclose or Suggest Limitation 1B” on pages 7-8 of the remarks, wherein the Applicants argued as follows:
“…The present application describes a "backgroundless video" as one in which the background is removed, resulting in a transparent or foreground-only video suitable for multilayer compositing. See Spec., [0017] ("The 'backgroundless' video, which may also be referred to herein as a transparent video, or a foreground video."). In contrast, the cited passage of Krol describes a process in which the background of a user's video is segmented and replaced with a solid color using image processing techniques. While Krol uses the term "excluded," the full context-which is included in the sentences immediately following the cited passage-makes clear that this exclusion is achieved by overwriting the background pixels with a uniform color, not by actual removing the background to create transparency or a true foreground-only video as required by claim 1…”
The Examiner respectfully disagrees with the Applicants’ arguments above. First, the term “backgroundless” is defined in paragraph [0017] of the Specification as referring to a transparent video. Also, in paragraph [0069], a graphical display 617, figure 6, displays a video with four layers. A first layer is the remote speaker, which may be referred to as a “foreground layer” of a video. The other layers are background layers, which are removed; for example, the Background Distance 1 (tree 562), Background Distance 2 (tree 564) and Background Distance n, as shown in figure 5, are removed. Therefore, the foreground 560 or the remote speaker layer 620 is a backgroundless (i.e., transparent or foreground) video of the remote speaker.
Also, the term “backgroundless” can be read on a video without physical surroundings around a foreground, such as where the physical surroundings are made to disappear in the video, as described in Krol. Krol teaches a video stream 602 capturing user 604 and the user’s physical surroundings 606, as shown in figure 6A. Krol further teaches the technique of removing, hiding or covering the physical surroundings 606 by converting each pixel to a solid colored background, etc., so that the video stream can be viewed as backgroundless (a video or video stream without a background) with the foreground of user 604, as shown in figure 6B (col.12, lines 49-55).
Claim 1 fails to clearly recite “backgroundless” as transparent with only the foreground presented in the video. The term “backgroundless” and the feature of removing a background in a video can be interpreted as the feature of “…physical surroundings 606 are excluded entirely by using…” as described in Krol. Therefore, the feature of “…physical surroundings 606 are excluded entirely by using…by converting each pixel to a solid (white) colored background…”, so that viewers (viewing participants) see no background, or backgroundless, in the video stream, as shown in figure 6B, reads on the feature of “removing the background” or “backgroundless video” as recited in the claims. The Applicants argued features (i.e., transparency or layers of background being removed, etc.) not clearly recited in the claims.
B/. In response to the Applicants’ arguments stated in the section:
“2. Krol Fails to Disclose or Suggest Limitation 1C” on pages 8-9 of the remarks, wherein the Applicants argued as follows:
“The third paragraph on page 3 of the Office Action alleges that col. 13, lines 32-36 discloses "transferring the video or the backgroundless video to a second client device." See Office Action… While limitation 1C recites transferring "the video or the backgroundless video" (emphasis added), the Office Action does not specify…”
The Examiner respectfully disagrees with the Applicants’ arguments above. The Applicants failed to clearly recite “backgroundless” as transparent and/or with only the foreground in the video. Therefore, the teaching of “excluding entirely the physical surroundings 606 by converting each pixel to a solid colored background” can read on the feature of “backgroundless” or “removing the background” in the video, so that the viewers see no background in the video. Again, the Applicants argued features (i.e., transparency or layers of background being removed, etc.) not clearly recited in the claims.
C/. In response to the Applicants’ arguments stated in the section:
“3. Krol in view of Oz Fails to Disclose or Suggest Limitation 1E” on pages 10-11 of the remarks, wherein the Applicants argued as follows:
“The Office Action appears to rely on several passages of Oz as disclosing the part of limitations 1E that recited “wherein an orientation of the backgroundless video…”
First, the cited paragraph [0124] of Oz described…This establishes that each participant in the system interacts through a computerized device capable of capturing video and processing user input, but does not address, teach, or suggest any aspect of orienting a backgroundless video relative to a background image based on a detected pose, as required by limitation 1E. The cited paragraphs [0134-0135] of Oz describe adjusting an orientation of a first user's avatar in a virtual environment to face a second user's avatar based on detecting, by gaze tracking, that the first user is looking at the second user's avatar. Detecting a user's gaze is different than detecting a pose of a face of such user, and further, rendering an avatar's orientation based on gaze tracking does not disclose or suggest displaying a backgroundless video combined with a background image, wherein an orientation of the backgroundless video to the background image is based on the pose…” (Emphasis added)
The Examiner respectfully disagrees with the Applicants’ arguments above. Paragraph [0076] of the specification in the application describes the following:
“…As shown in FIG. 9, a pose of a face may include a horizontal location (or X translation), a vertical location (or Y translation), a yaw, a pitch, or a roll. These components combine to yield a given spatial location of the eyes of the viewing participant, which is the ultimate determinant of the viewing participant's point of view with respect to the camera of the receiver side 720. The terms face pose, head pose, and eyes pose are used interchangeably herein, and are an example of a detected feature…” (emphasis added)
Thus, Oz teaches the feature of detecting a user’s gaze (i.e., a detected feature) toward a second user’s avatar (i.e., the eyes of the viewing participant). Thus, the user’s gaze is an eyes pose, or the “detected pose” as recited in the claims. Also, Oz further teaches the original panoramic view 41, as shown in figure 5, comprising participants 51-55, each looking at a camera. Oz further teaches that the conference system detects a direction of gaze (as a detected pose) of the first participant 55 (head turns) towards the fifth participant 51. Then, the conference system orients and updates the panoramic view, as shown in figure 5, with the table 60 background. The head of avatar 55 is oriented so that he or she looks at the fifth participant (avatar 51). By the combination of Krol and Oz, the updated panoramic view is displayed to the participants with the table 60 background. Therefore, Oz clearly teaches the feature of orienting a backgroundless video relative to a background image based on a detected pose.
D/. In response to the Applicants’ arguments stated in the section:
“4. No Clear and Particular Teaching or Suggestion to Motivate Combining” on pages 7-8 of the remarks, wherein the Applicants argued as follows:
“…Office Action at 4 (emphasis added). However, this rationale is conclusory and lacks the specific, articulated reasoning required by law-it merely restates the claimed result, rather than identifying any actual motivation or suggestion in the prior art itself. Such a restatement of the claim's purpose, without more, is insufficient to satisfy the examiner's burden. The Federal Circuit has made clear that… The Office Action does not identify any clear and particular teaching or suggestion in the cited prior art that would have motivated a person of ordinary skill to combine the references in the manner required by the claims. Accordingly, the combination of Krol and Oz fails to render claim 1 obvious.”
The Examiner respectfully disagrees with the Applicants’ arguments above. In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, Krol teaches the feature of removing the background in the video stream and transmitting the video stream without any background (i.e., a backgroundless video stream) to the conference system. Oz teaches backgroundless video streams sent from multiple participants in a conference. Oz further teaches the feature of detecting one of the multiple participants looking at another based on the detected direction of gaze or detected pose. Thus, Oz teaches the feature of reorienting the video streams, such as the backgroundless video from the teaching of Krol, received from one or more of the multiple participants, and combining the other received videos with the table 60 background, to produce the updated panoramic view as shown in figure 5 of Oz.
With all remarks to the Applicants’ arguments as discussed above, the Examiner believes that the interpretations of the terms “backgroundless video”, “removing the background”, “detected pose”, etc., as remarked and discussed above, are proper. The Examiner also believes that the rejections as set forth in the previous Office Action, as well as in this Final Office Action, are proper and permissible. Therefore, the rejections of the claims have been maintained.
A shortened statutory period for response to this final action is set to expire THREE MONTHS from the date of this action. In the event a first response is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event will the statutory period for response expire later than SIX MONTHS from the date of this final action.
Any response to this final action should be mailed to:
BOX AF
Commissioner of Patents and Trademarks
Washington, D.C. 20231
Or faxed to:
(703) 872-9314 or (301) 273-8300 (for formal communications;
please mark “EXPEDITED PROCEDURE”)
Or, if it is an informal or draft communication, please label it “PROPOSED” or “DRAFT”.
Hand Carry Deliveries to:
Customer Service Window
(Randolph Building)
407 Dulany Street
Alexandria, VA 22314
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BINH TIEU whose telephone number is (571) 272-7510. The examiner can normally be reached 9-5. The examiner’s fax number is (571) 273-7510 and e-mail address is BINH.TIEU@USPTO.GOV.
Examiner interviews are available via telephone or video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, FAN S. TSANG can be reached on (571) 272-7547.
Any response to this action should be mailed or hand-carried to:
Commissioner of Patents and Trademarks
401 Dulany Street
Alexandria, VA 22314
Or faxed to: (571) 273-8300
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. If you have any questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/Binh Kien Tieu/Primary Examiner, Art Unit 2694
Date: February 2026