DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after April 11, 2024, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2 and 4-8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Makker et al. (US 20210390953 A1, hereinafter Makker).
Regarding claim 1, Makker discloses a system for integrating personalized visual objects into a video stream during an online meeting, comprising: at least one meeting host device with an image capture component for capturing at least one image of a meeting host (“a first processor operatively coupled to a first media display and associated sensor disposed at a first location occupied by at least one first user” Makker [0009]); and at least one meeting participant device with a display component (“a second processor operatively coupled to a second media display disposed at a second location occupied by at least one second user” Makker [0009]) for displaying the at least one image of the meeting host to the meeting participant (“displaying, with the first media display at the first location, a media stream from the second processor communicated to the first media display” Makker [0009]); a meeting server which communicates between the at least one meeting host device and the at least one meeting participant device to host the online meeting (“via the communication link” Makker [0009]); and a visual object database and application server which generates and inserts at least one visual object into the video stream of the online meeting to display on the display screen of the at least one meeting participant device (“reproduces virtual objects devised to enhance an illusion that the remote user is integrated with the local environment” Makker [0088]).
Regarding claim 2, Makker discloses the system of claim 1, wherein the at least one visual object is generated for each meeting participant device based on the attributes of each meeting participant (“As another example, by including matching furnishings at multiple endpoints of a video conference (e.g., including a real table or desk in front of each real media display), each combined field of view for each respective participant (e.g., their view of their local environment combined with the virtual objects generated on their media display) can include the same matching desk or table on both sides of a video conference for creating a convincing telepresence illusion.” Makker [0103]).
Regarding claim 4, Makker discloses the system of claim 1, wherein the visual objects are inserted into a background image of the online meeting (“In some embodiments, virtual overlays are added to the displayed media stream that are configured to imitate the local environment (e.g., a ledge, a plant, or any other object). The virtual overlay object may match the aesthetics of the local environment. The virtual overlay may be added automatically and/or per user's request.” Makker [0104]).
Regarding claim 5, Makker discloses the system of claim 1, wherein the visual objects may be one or more of: a video, an animation (“In some embodiments, virtual overlays are added to the displayed media stream that are configured to imitate the local environment (e.g., a ledge, a plant, or any other object)” Makker [0104]), a set of digital images (GIFs), a single digital image, a QR code, 2-dimensional SVGs or 3-dimensional digital objects.
Regarding claim 6, Makker discloses the system of claim 1, wherein the visual objects are selected based on visual characteristics of a background image in the online meeting (“For example, a table or desk in the local environment placed in front of the media display may be oriented in an alignment that would extend into a plausible juxtaposition with the remote participant.” Makker [0103]).
Regarding claim 7, Makker discloses the system of claim 1, further comprising a virtual camera which generates a video stream with a background image for the meeting host in conjunction with the visual objects (“For example, a virtual extension of the virtual perspective add-on overlay object (e.g., plant, or furniture such as a table) to the virtual image of the remote participant may be added.” Makker [0103]).
Regarding claim 8, Makker discloses the system of claim 1, wherein the user can interact with the visual objects (“Media display 1630 also displays ledge perspective overlay 1611 and icons 1612 that facilitate voice and streaming control.” Makker [0133]; Fig. 16).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3 and 9-13 are rejected under 35 U.S.C. 103 as being unpatentable over Makker et al. (US 20210390953 A1, hereinafter Makker) in view of Copley et al. (US 20240022688 A1, hereinafter Copley).
Regarding claim 3, Makker discloses the system of claim 1.
Makker does not expressly teach “wherein the at least one visual object is generated based on the attributes of the meeting host.”
However, Copley does teach wherein the at least one visual object is generated based on the attributes of the meeting host (“visual data collected from one or more host devices 104 (e.g., including AR devices 304a and/or 304b) can be used to generate a partial or complete three-dimensional (3D) model of a host site (e.g., 301).” Copley [0059]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the immersive digital experience (as taught by Makker) with the teleconferencing apparatus (as taught by Copley). The rationale to do so is to combine prior art elements according to known methods to yield the predictable result of utilizing host attributes to generate a visual object that best aligns with the information discovered.
Regarding claim 9, Makker discloses a method for generating user-based visual objects in a web conferencing application, the method comprising the steps of: inserting the at least one visual object into the virtual background during a live web conferencing session (“In some embodiments, virtual overlays are added to the displayed media stream that are configured to imitate the local environment (e.g., a ledge, a plant, or any other object). The virtual overlay object may match the aesthetics of the local environment. The virtual overlay may be added automatically and/or per user's request.” Makker [0104]); and displaying the virtual background and the at least one visual object to at least one meeting participant on a display device (“In some embodiments, virtual overlays are added to the displayed media stream” Makker [0104]).
Makker does not expressly teach “identifying a plurality of attributes of a host of a web conferencing session; and generating at least one visual object based on the plurality of attributes of the host.”
However, Copley does teach identifying a plurality of attributes of a host of a web conferencing session (“visual data collected from one or more host devices 104 (e.g., including AR devices 304a and/or 304b) can be used to generate a partial or complete three-dimensional (3D) model of a host site (e.g., 301).” Copley [0059]); and generating at least one visual object based on the plurality of attributes of the host (“visual data collected from one or more host devices 104 (e.g., including AR devices 304a and/or 304b) can be used to generate a partial or complete three-dimensional (3D) model of a host site (e.g., 301).” Copley [0059]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the immersive digital experience (as taught by Makker) with the teleconferencing apparatus (as taught by Copley). The rationale to do so is to combine prior art elements according to known methods to yield the predictable result of utilizing host attributes to generate a visual object that best aligns with the information discovered.
Regarding claim 10, Makker, in view of Copley, discloses the method of claim 9, further comprising generating the at least one visual object for each meeting participant device based on attributes of each meeting participant (“As another example, by including matching furnishings at multiple endpoints of a video conference (e.g., including a real table or desk in front of each real media display), each combined field of view for each respective participant (e.g., their view of their local environment combined with the virtual objects generated on their media display) can include the same matching desk or table on both sides of a video conference for creating a convincing telepresence illusion.” Makker [0103]).
Regarding claim 11, Makker, in view of Copley, discloses the method of claim 9, wherein the visual objects may be one or more of: a video, an animation (“In some embodiments, virtual overlays are added to the displayed media stream that are configured to imitate the local environment (e.g., a ledge, a plant, or any other object)” Makker [0104]), a set of digital images (GIFs), a single digital image, a QR code, 2-dimensional SVGs or 3-dimensional digital objects.
Regarding claim 12, Makker, in view of Copley, discloses the method of claim 9, further comprising generating the at least one visual object based on visual characteristics of a background image in the online meeting (“For example, a table or desk in the local environment placed in front of the media display may be oriented in an alignment that would extend into a plausible juxtaposition with the remote participant.” Makker [0103]).
Regarding claim 13, Makker, in view of Copley, discloses the method of claim 9, wherein the attributes of the host may include one or more of: user settings and attributes (“For example, a meeting presenter may have content manipulation rights of his presented content, whereas a non-presenter may not…The manipulation right prescription can be visible and/or manipulable via an app. The manipulation right prescription may be presented on the media display, e.g., during presentation (e.g., in a dropdown menu and/or screen).” Makker [0108]), topics, content, and subjects discussed in the web conferencing session.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Makker et al. (US 20210390953 A1, hereinafter Makker) in view of Copley et al. (US 20240022688 A1, hereinafter Copley), and further in view of Roper (US 20230126108 A1).
Regarding claim 14, Makker, in view of Copley, discloses the method of claim 9.
Makker, in view of Copley, does not expressly teach “further comprising displaying the virtual background and the at least one visual object to at least one meeting participant on a display device for a predetermined frequency, pace or duration in accordance with attributes of the host or a visual object owner.”
However, Roper does teach further comprising displaying the virtual background and the at least one visual object to at least one meeting participant on a display device for a predetermined frequency, pace or duration in accordance with attributes of the host or a visual object owner (“determine a prior selected virtual background and an elapsed time or an elapsed number of meetings; and wherein the first virtual background is selected to be different from the prior selected virtual background based on the elapsed time or the elapsed number of meetings.” Roper [0117]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the immersive digital experience (as taught by Makker), in view of Copley, with the context-changing virtual background application (as taught by Roper). The rationale to do so is to combine prior art elements according to known methods to yield the predictable result of replacing the need for users to change virtual backgrounds manually with an automated approach, thereby reducing the burden on users and providing them with a more seamless, realistic experience.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Makker et al. (US 20210390953 A1, hereinafter Makker) in view of Copley et al. (US 20240022688 A1, hereinafter Copley), and further in view of Jin et al. (US 20230299988 A1, hereinafter Jin).
Regarding claim 15, Makker, in view of Copley, discloses the method of claim 9.
Makker, in view of Copley, does not expressly teach “further comprising analyzing the effectiveness of the visual objects with regard to interaction by the meeting participants.”
However, Jin does teach further comprising analyzing the effectiveness of the visual objects with regard to interaction by the meeting participants (“The smart background engine 1102 continuously analyzes the media stream of the collaboration session and uses participant feedback, actions, interactions, and audio feed to translate elements that are contextually relevant into actionable objects and to replace actionable objects that are not of interest to the participants back into elements.” Jin [0088]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the immersive digital experience (as taught by Makker), in view of Copley, with the dynamic background application (as taught by Jin). The rationale to do so is to combine prior art elements according to known methods to yield the predictable result of utilizing user insight to better tailor videoconferencing to user preferences.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAAD AHMED SYED whose telephone number is (571) 272-6777. The examiner can normally be reached Monday - Friday 8:30 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen, can be reached at (571) 272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAAD AHMED SYED/ Examiner, Art Unit 2691
/DUC NGUYEN/ Supervisory Patent Examiner, Art Unit 2691