DETAILED ACTION
This Office Action is sent in response to Applicant's Response received 01/13/2026 for Application No. 18433081. Claims 1-5, 7-11, and 13 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Arguments
In view of Applicant's amendments, the objections to the drawings with respect to claim 6; reference characters 304, 306, 308, and 310; and Figure 8 have been withdrawn.
However, the specification continues to recite reference character 512, which is not included in the drawings filed 03/08/2024 and 01/13/2026. The objection to the drawings with respect to reference character 512 is therefore maintained.
In view of Applicant's amendments, the objection to the specification has been withdrawn.
In view of Applicant's amendments, the objections to claims 3, 6, 8, 9, 12, and 13 have been withdrawn.
In view of Applicant's amendments, the 35 U.S.C. 112 rejections of claims 1-14 have been withdrawn.
Applicant's arguments with respect to the 102 rejection of claim 13 have been fully considered but are not persuasive in view of the new and/or updated citations applied in the current rejection of record under Han in response to the newly amended limitations, including at least the instructions in a file data structure for the immersive backgrounds specifying seat assignment, region, priority, and location, as disclosed in Han [para 0048-0050].
Applicant acknowledges that Han discloses rendering participant images within a virtual scene [pgs. 14:7-15:2] but argues that Han does not teach or disclose "a plurality of virtual space background objects, each of which includes a plurality of seats and is associated with the seat arrangement metadata" [pgs. 14:7-15:1]. Examiner respectfully disagrees.
In this case, Applicant's specification does not set forth a special definition of "seat", merely an example of how one is depicted: "it is necessary to express seats in perspective by making the rear seat or desk smaller than the front seat or desk and also adjust the perspective of the participant image" [Specification, pg. 2:2]. Thus, Applicant appears to argue that a "seat" is a desk [Specification, pg. 2:2] or a discrete organizing unit [Arguments, pg. 15:2]. These definitions of "seat" are not recited in the claim. The broadest reasonable interpretation of "seat" includes a seating arrangement.
Han teaches dynamic immersive backgrounds, including scenes with locations and features where participant video streams may be placed, where an immersive background includes an image with predefined locations, such as rows of theatre seating, and image features including at least desks or chairs [para 0040-0041]. Therefore, when instructions for displaying an immersive background include a data structure with respect to that immersive background [para 0048-0049], Han teaches "a plurality of virtual space background objects, each of which includes a plurality of seats and is associated with the seat arrangement metadata".
Applicant also argues that Han does not teach or disclose "seat arrangement metadata that is mapped or embedded to the virtual space background object and represents, for each seat, seat information including a seat size, a seat level, and seat location coordinates" [pgs. 14:7-15:1]. Examiner respectfully disagrees.
Han teaches instructions including a file identifying individual seat positions, priorities, and seat order with respect to an individual immersive background [para 0048-0049]. Therefore, when a device assigns participant video streams to specific virtual seats based on seat information included in the file [para 0049-0050], the file of Han teaches "seat arrangement metadata that is mapped or embedded to the virtual space background object and represents, for each seat, seat information including a seat size, a seat level, and seat location coordinates".
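Purely as an illustrative sketch of the kind of file data structure discussed above (every name, field, and value below is hypothetical, not taken from Han, Powell, or the application's disclosure), seat arrangement metadata associating a background with per-seat size, level, and location, and an assignment of participants to seats by seat level, might look like:

```python
# Hypothetical "seat arrangement metadata" for one virtual space background.
# All identifiers and values are illustrative assumptions, not from the record.
seat_arrangement_metadata = {
    "background_id": "lecture_hall_01",
    "seats": [
        # Each seat entry carries a size (w, h), a level (priority),
        # and location coordinates (x, y) within the background image.
        {"seat_id": 1, "size": (160, 120), "level": 1, "location": (400, 520)},
        {"seat_id": 2, "size": (140, 105), "level": 2, "location": (260, 430)},
        {"seat_id": 3, "size": (140, 105), "level": 2, "location": (540, 430)},
        {"seat_id": 4, "size": (120, 90), "level": 3, "location": (400, 350)},
    ],
}

def assign_participants(metadata, participants):
    """Map participants to seats in order of seat level (lowest number first).

    Python's sort is stable, so seats sharing a level keep their file order.
    """
    ordered = sorted(metadata["seats"], key=lambda s: s["level"])
    return {p: s["seat_id"] for p, s in zip(participants, ordered)}

placement = assign_participants(seat_arrangement_metadata, ["host", "alice", "bob"])
# placement maps each participant to the seat_id of the next-highest-priority seat.
```

This sketch only illustrates the mapping concept at issue (a file associating each seat of a background with size, priority, and coordinates, then assigning video streams to seats from that file); it is not asserted to reflect Han's actual implementation.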
In response to Applicant's argument that the references fail to show certain features of Applicant’s invention, it is noted that the features upon which Applicant relies (i.e., "discrete seats as organizing units" [pg. 15:2]; "underlying data structures", "rules used to determine participant placement" [pg. 15:4]) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Claim 13 remains rejected.
Applicant's arguments with respect to the 103 rejection of claim 1 have been fully considered but are not persuasive.
With respect to Applicant's arguments that Han, alone or in combination, does not teach "a plurality of virtual space background objects, each of which includes a plurality of seats and is associated with seat arrangement metadata" as similarly recited in claim 13, Examiner responds to each of these arguments as above.
In response to Applicant's argument that the references fail to show certain features of Applicant’s invention, it is noted that the features upon which Applicant relies (i.e., "server-side automatic arrangement of participant images", "consistent participant placement", "protocol-aware seating", "elimination of manual rearrangement in large-scale conferences" [pg. 16:3]; "decomposing a virtual background" [pg. 16:4]; the virtual background "as a structured object having a seat-base data model" [pg. 17:1]) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
With respect to Applicant's arguments that the examiner’s conclusion of obviousness is based on improper hindsight reasoning [pg. 17:1], “[a]ny judgment on obviousness is in a sense necessarily a reconstruction based on hindsight reasoning, but so long as it takes into account only knowledge which was within the level of ordinary skill in the art at the time the claimed invention was made and does not include knowledge gleaned only from applicant’s disclosure, such a reconstruction is proper.” In re McLaughlin, 443 F.2d 1392, 1395, 170 USPQ 209, 212 (CCPA 1971) [see MPEP 2145]. In this case, the Office Action cites the combination of the prior art element of utilizing dynamic immersive backgrounds in a video conferencing environment as described in Han with the prior art element of presenting and selecting from a plurality of preconfigured virtual seating charts as described in Powell. Powell provides a motivation to allow participant engagement with virtual collaboration sessions to facilitate interactions between participants in the collaboration sessions [Powell, para 0034-0035]. Thus, the Office Action presents a prima facie case of obviousness and has not employed hindsight in combining Han and Powell as proposed.
Claim 1 remains rejected over Han in view of Powell.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: reference character 512 [pg. 26:2].
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim 13 is rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claim 13 recites "the participants from the conference host terminal device". It is unclear whether "the participants" refers to the "multiple participants" connecting to a video conference using their own terminal devices or to other participants of the conference host terminal device. For purposes of examination, the limitation has been interpreted as "[[the]] --a-- participant[[s]] from the conference host terminal device".
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim 13 is rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Han et al. (US 20220353438 A1).
As to claim 13, Han discloses a video conference setting and executing method, which is a method for a conference host terminal device to set and execute a video conference conducted by a group communication server to which multiple participants are able to be connected using their own terminal devices [Fig. 1, para 0021-0022, video conference provider (read: group communication server) connects users (read: participants) of client devices (read: terminal devices)], comprising:
transmitting a request to proceed with a conference mode in which participant images are arranged on a selected virtual space background object based on seat arrangement metadata that is mapped or embedded to the virtual space background object and represents, for each seat, seat information including a seat size, a seat level, and seat location coordinates to the group communication server [para 0040, 0046-0050, host indicates (read: request) to video conference provider conference to be displayed with immersive scene (read: virtual space background object, note the broadest reasonable interpretation of an object includes any materially perceivable element) with seating selected by host, where displaying scene includes rendering video stream of client user (read: participant image) placed (read: arranged) on scene based on received instruction data structure (read: seat arrangement metadata) for (read: mapped to) immersive background including metadata properties including individual seat region (read: size), priority (read: seat level), and pixel location];
receiving a plurality of virtual space background objects, each of which includes a plurality of seats and is associated with the seat arrangement metadata, from the group communication server through a network [para 0021, 0039-0040, 0047-0049, 0054, video conference provider provides stored set of immersive backgrounds (read: virtual space background objects) over network to host, where background utilizes file data structure], and
transmitting information on a virtual space background object selected by a host [Fig. 3, para 0039-0040, 0046-0047, distribute immersive background (read: virtual space background object) selected by host];
obtaining and encoding an image and audio of the participants from the conference host terminal device using a camera and microphone [para 0026, 0036-0037, 0061, encrypt video and audio stream of host participant using client device captured with camera input device and microphone] and
transmitting encoded images and audio to the group communication server [para 0036-0037, 0046-0047, provide encrypted client user video and audio streams to video conference provider];
receiving the selected virtual space background object, multiple participant images, and composite audio from the group communication server [para 0046-0047, video conference provider forwards selected background and communicates encrypted client user video streams and audio streams]; and
overlaying and outputting the multiple participant images on the selected virtual space background object based on the seat arrangement metadata [Figs. 2-3, para 0040-0042, 0045, 0048-0050, client devices display immersive scene with client user video streams placed in background based on background instruction data structure including seat assignment order, note Figure 2 shows overlaying participant images on selected virtual space background object with background viewable through transparent backgrounds of user video streams].
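As a minimal, hypothetical sketch of the overlay step mapped above (the function, field names, and values are assumptions for illustration, not the application's or Han's actual code), computing where each participant image would be drawn on the selected background from per-seat size and location coordinates might look like:

```python
# Illustrative only: derive, for each participant, the pixel rectangle
# (x, y, w, h) at which its image would be overlaid on the background,
# using per-seat "size" and "location" fields of assumed seat metadata.
def overlay_rects(seats, participant_ids):
    rects = {}
    for pid, seat in zip(participant_ids, seats):
        w, h = seat["size"]      # seat size -> displayed image size
        x, y = seat["location"]  # seat location coordinates on the background
        rects[pid] = (x, y, w, h)
    return rects

seats = [
    {"size": (160, 120), "location": (400, 520)},
    {"size": (120, 90), "location": (400, 350)},
]
rects = overlay_rects(seats, ["p1", "p2"])
```

A renderer would then scale each decoded participant stream to its rectangle's width and height and composite it at that position over the background; that compositing step is omitted here as it depends on implementation details not in the record.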
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5 and 7-11 are rejected under 35 U.S.C. 103 as being unpatentable over Han in view of Powell et al. (US 20230028265 A1).
As to claim 1, Han discloses a video conference support method, which is performed by a group communication server to which multiple participants are able to be connected using their own terminal devices [Fig. 1, para 0021-0022, video conference provider (read: group communication server) connects users (read: participants) of client devices (read: terminal devices)], comprising:
… a plurality of virtual space background objects, each of which includes a plurality of seats and is associated with seat arrangement metadata that is mapped or embedded to the virtual space background object, the seat arrangement metadata representing, for each seat, seat information including a seat size, a seat level, and seat location coordinates, to a host terminal device through a network [para 0021, 0024, 0039-0040, 0045, 0047-0050, 0054, host using client device connected to a communication network selects from set of immersive backgrounds (read: virtual space background objects, note the broadest reasonable interpretation of an object includes any materially perceivable element) with seating, where instruction data structure (read: seat arrangement metadata) to render video streams in (read: mapped to) background include seat region (read: size), priority (read: seat level), and pixel location], and
receiving information on a selected virtual space background object from the host terminal device [Figs. 3-4, para 0044-0045, 0047-0048, 0054, receive host selection of background from set of immersive backgrounds];
obtaining participant images each from the terminal devices [Fig. 3, para 0043, receive user video streams from corresponding client devices];
constructing a composite image by arranging all the participant images on the selected virtual space background object based on the seat arrangement metadata and a predetermined participant arrangement order that is applied based on the seat arrangement metadata [Figs. 3-4, para 0048-0050, 0055, render immersive scene (read: composite image) by placing (read: arranging) all user video streams displayed at background regions based on instruction data structure metadata and seat assignment order (read: predetermined participant arrangement order) included in instruction data structure]; and
encoding the selected virtual space background object [], and the participant images and audio [para 0036-0037, 0046, encrypt conference content including background and client video and audio streams],
transmitting encoded data to the terminal devices [para 0036-0037, 0046-0047, communicate encrypted conference content between client devices], and
causing the terminal devices to overlay and output the participant images on the selected virtual space background object [Figs. 2-3, para 0040-0042, 0045, 0050, client devices display immersive scene with client user video streams placed in background, note Figure 2 shows overlaying participant images on selected virtual space background object with background viewable through transparent backgrounds of user video streams].
However, Han does not specifically disclose presenting a plurality of virtual space background objects and the selected virtual space background object constituting the composite image.
Powell discloses
presenting a plurality of virtual space background objects [Figs. 8-9, para 0054-0055, user interface displays seating charts defining virtual collaboration session layout, note seats in displayed seating chart layout are overlaid with participant names and thus fall under the broadest reasonable interpretation of a background], and
the selected virtual space background object constituting the composite image [Fig. 9, para 0054-0055, 0086-0087, 0089, display virtual collaboration session with seating chart overlaid with participant names].
Han and Powell are analogous art to the claimed invention being from a similar field of endeavor of video conferencing systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual space background objects and the selected virtual space background object as disclosed by Han with presenting background objects and a background object constituting a composite image as disclosed by Powell with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Han as described above to facilitate interaction in virtual collaboration sessions [Powell, para 0034].
As to claim 2, Han discloses the method of claim 1, further comprising, before performing … to the host terminal device through the network and the receiving of the information on the selected virtual space background object from the host terminal device, receiving a request to proceed with a conference mode from a predetermined conference host terminal device [Figs. 2-3, para 0039-0040, 0047, host using client device (read: predetermined conference host terminal device) indicates conference to be displayed with immersive scene (read: conference mode), note video streams are displayed without the immersive scene before the host indicates display of the immersive scene, and the provider selects from the set of backgrounds after the immersive scene is enabled].
However, Han does not specifically disclose performing the presenting of the plurality of virtual space background objects to the host terminal device through the network.
Powell discloses performing the presenting of the plurality of virtual space background objects to the host terminal device through the network [Figs. 8-9, para 0043-0044, 0054-0055, user interface displays seating charts defining virtual collaboration session layout for selection by host participant using client device via network].
Han and Powell are analogous art to the claimed invention being from a similar field of endeavor of video conferencing systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the performing as disclosed by Han with performing presentation of background objects to a host device as disclosed by Powell with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Han as described above to facilitate interaction in virtual collaboration sessions [Powell, para 0034].
As to claim 3, Han discloses the method of claim 2, further comprising receiving and determining the predetermined participant arrangement order from the predetermined conference host terminal device after receiving the request to proceed with the conference mode [Fig. 3, para 0040, 0048-0049, receive background instructions including seat assignment from host after host indicates conference to be displayed with immersive scene].
As to claim 4, Han discloses the method of claim 1, wherein, in the encoding, and transmitting, the selected virtual space background object, the participant images, and the audio are individually encoded to generate the encoded data, and the encoded data is transmitted to the terminal devices [para 0036-0037, 0046-0047, encrypt conference content including background and client video and audio streams and communicate (read: transmit) encrypted conference content between client devices, where video and audio streams are both (read: individually) encrypted].
As to claim 5, Han discloses the method of claim 1, wherein each of the plurality of virtual space background objects includes a plurality of seats where a seat size and a displayable user image size are determined differently in perspective [para 0040, 0045, 0048-0049, 0052, background includes seat locations at selected region positions (read: size) in the background, at which the size of the displayed user video stream changes with user perspective].
As to claim 7, Han and Powell, combined at least for the reasons above, disclose a group communication support device that allows multiple participants to conduct a video conference using their own terminal devices, comprising: a memory configured to store program instructions; and a processor that is connected to the memory and executes the program instructions stored in the memory, wherein, when the program instructions are executed by the processor, the program instructions allow the processor [Fig. 7, para 0061, device for video conferencing includes memory storing instructions executed by processor] to perform limitations substantially similar to those recited in claim 1, and claim 7 is rejected under similar rationale.
As to claims 8-11, Han and Powell, combined at least for the reasons above, disclose the device of claim 7 performing limitations substantially similar to those recited in claims 2-5, respectively, and are rejected under similar rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Wilson et al. (US 20250200843 A1) generally teaches constructing a composite image by arranging images on a virtual space background object.
Ferreira, J. (Hands-On Microsoft Teams: A Practical Guide to Enhancing Enterprise Collaboration with Microsoft Teams and Microsoft 365) and Volpe, F. (Microsoft Teams Administration Cookbook: Quick Solutions for Administrators in the Modern Workplace) generally disclose providing immersive video conferencing experiences and defining scenes including seating properties.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINDA HUYNH whose telephone number is (571)272-5240 and email is linda.huynh@uspto.gov. The examiner can normally be reached M-F between 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Queler, can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LINDA HUYNH/Primary Examiner, Art Unit 2172