DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Status of the Claims
Claims 1-20 are pending for examination.
Claims 1-20 are rejected under 35 U.S.C. §103.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 4-6, 8-9, 11, 13, 15-16, and 18-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Baron (U.S. 2012/0050323, hereinafter Baron) in view of Barzuza et al. (U.S. 2017/0142371, hereinafter Barzuza) in further view of Seo et al. (U.S. 2014/0250406, hereinafter Seo).
As to Claim 1, Baron teaches a method, comprising:
identifying a virtual background template for a conference (Baron (¶0028 line 1-5), background data includes a set of virtual background selection rules),
determining that contextual information of the conference includes the tag (Baron (¶0029 line 1-4 and last 8 lines), contextual information includes the tag, such as a participant's name/position);
in response to determining that the contextual information of the conference includes the textual tag (Baron (¶0030 line 1-10), the virtual background is selected based on the virtual background selection rules); and
displaying the virtual background during the conference (Baron (¶0035 line 10-15, fig. 3), the virtual background is displayed with the user's image data in figure 3).
Baron does not explicitly disclose:
wherein the virtual background template comprises a base layer defining a primary virtual background and a boundary layer configured to display media content within the virtual background boundary area at a particular location layered over the primary virtual background;
Barzuza teaches:
wherein the virtual background template comprises a base layer defining a primary virtual background and a boundary layer configured to display media content within the virtual background boundary area at a particular location layered over the primary virtual background (Barzuza (¶0078 line 1-7, fig. 3 item 304), the background modifier divides the image into two sets of pixels: a first set of pixels corresponding to the local participant image and background information that is not to be replaced, and a second set of pixels corresponding to background information to be replaced with the selected template)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the background image of Baron to instead be the background taught by Barzuza, with a reasonable expectation of success. The motivation would be to provide "privacy at home, virtual roll-up, branding and more …" (Barzuza (¶0074 last 4 lines)) (teaching, suggestion, motivation).
Baron in view of Barzuza does not explicitly disclose:
wherein the virtual background template includes a placement of a virtual background boundary area and a textual tag directly input by a user into the virtual background boundary area via a user interface for configuring the virtual background template,
wherein a first size of the virtual background boundary area is smaller than a second size of the virtual background template; and
wherein the first size and a location of the first virtual background area are determined based on a box that a user draws over a visual representation of the virtual background template, and
generating, in real-time and for use in the conference, a virtual background using the virtual background template and the media file by displaying the media file identified based on the textual tag in a boundary area of the virtual background corresponding to the virtual background boundary area of the virtual background template, and layered over the primary virtual background; and
Seo teaches:
wherein the virtual background template includes a placement of a virtual background boundary area (Seo (¶0103 line 4-7, fig. 12), the user draws a rectangle to create a frame on the display) and a textual tag directly input by a user into the virtual background boundary area via a user interface for configuring the virtual background template (Seo (¶0101 line 5-10, fig. 10), "the user may execute the application and input an object "CAR" as shown in FIG. 10. Then the electronic device 100 may display the page 200 on which the corresponding object (e.g. text object of "CAR") is presented in response to the user input as shown in FIG. 10.", the user inputs a textual tag as the "CAR" text),
wherein a first size of the virtual background boundary area is smaller than a second size of the virtual background template (Seo (¶0103 line 4-7, fig. 12), the user draws a rectangle to create a frame on the display); and
wherein the first size and a location of the first virtual background area are determined based on a box that a user draws over a visual representation of the virtual background template (Seo (¶0103 line 4-7, fig. 12), the user draws a rectangle to create a frame on the display), and
retrieving a media file from a plurality of media files based on matching the textual tag to a tag associated with the media file, wherein the matching comprises comparing the textual tag input into the virtual background boundary area with text identified in the contextual information of the conference (Seo (¶0105, ¶0106), “performs text recognition on the input object to generate a keyword (e.g. CAR) and retrieves the internal and/or external data matching the keyword according to a data acquisition mode. For example, the electronic device 100 may perform text recognition on the input object to acquire the keyword "CAR" and retrieve the data matching the keyword "CAR" from the internal and/or external data. The retrieved data is buffered and then processed so as to be presented in the frame area 600”);
generating, in real-time and for use in the conference, a virtual background using the virtual background template and the media file by displaying the media file identified based on the textual tag in a boundary area of the virtual background corresponding to the virtual background boundary area of the virtual background template, and layered over the primary virtual background (Seo (¶0104-¶0105, fig. 12), system performs text recognition on the keyword and displays the input object); and
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the background modification module of Baron in view of Barzuza to instead be the background modifier taught by Seo, with a reasonable expectation of success. The motivation would be to provide "more condensed client device steps, to reduce confusion and enhance user convenience" (Seo (¶0006 last 3 lines)) (teaching, suggestion, motivation).
As to Claim 2, besides Claim 1, Baron in view of Barzuza in further view of Seo teaches wherein identifying the media file based on the tag comprises: identifying that the media file is tagged with the tag (Baron (¶0029 line 1-4 and last 8 lines), contextual information includes the tag, such as a participant's name/position; Baron (¶0030 line 1-10), the virtual background is selected based on the virtual background selection rules).
As to Claim 3, besides Claim 1, Baron in view of Barzuza in further view of Seo teaches further comprising: receiving the tag as a textual input directly into the virtual background boundary area via a user interface for configuring the virtual background template (Seo (¶0104 line 1-5, fig. 12), the user inputs the "Car" keyword on the display).
As to Claim 4, besides Claim 1, Baron in view of Barzuza in further view of Seo teaches further comprising:
receiving an initial placement of the virtual background boundary area via a user interface for configuring the virtual background template (Baron (¶0035 line 1-5), the captured background is replaced with the virtual background image data); and
setting the placement of the virtual background boundary area in the virtual background template based on an input received via the user interface (Baron (¶0036 line 6-9), the user can request a change to the virtual background).
As to Claim 5, besides Claim 1, Baron in view of Barzuza in further view of Seo teaches:
generating, in real-time and for use in the conference, the virtual background comprises:
displaying the media file over another virtual background boundary area described in the virtual background template (Seo (¶0104-¶0105, fig. 12, ¶0234, fig. 48), system performs text recognition on the keyword and displays the input object).
As to Claim 6, besides Claim 1, Baron in view of Barzuza in further view of Seo teaches generating, in real-time and for use in the conference, the virtual background comprises:
displaying the virtual background boundary area according to a location, a shape, and a size of the virtual background boundary area included in the virtual background template (Baron (¶0034 last 10-15), the virtual background replaces the captured background image).
As to Claim 7, besides Claim 1, Baron in view of Barzuza in further view of Seo teaches:
wherein the virtual background template includes more than one virtual background boundary area that are layered (Seo (¶0219 line 7-8, fig. 45), plurality of subareas are displayed).
As to Claim 8, Baron teaches a system, comprising:
a memory (Baron (¶0020 line 1-2), storage component); and
a processor, the processor configured to execute instructions stored in the memory (Baron (¶0020 line 1-2), one or more processors) to:
The rest of the Claim is rejected for the same reasons as Claim 1.
As to Claim 9, the Claim is rejected for the same reasons as Claim 2.
As to Claim 11, the Claim is rejected for the same reasons as Claim 4.
As to Claim 13, the Claim is rejected for the same reasons as Claim 6.
As to Claim 15, the Claim is rejected for the same reasons as Claim 1.
As to Claim 16, Baron in view of Barzuza in further view of Seo teaches:
wherein matching a tag associated with the media file to the textual tag comprises comparing the textual tag with at least one of a meeting title, participant email addresses, or a meeting description (Seo (¶0101 line 5-10, fig. 10), "the user may execute the application and input an object "CAR" as shown in FIG. 10. Then the electronic device 100 may display the page 200 on which the corresponding object (e.g. text object of "CAR") is presented in response to the user input as shown in FIG. 10.", the user inputs a textual tag as the "CAR" text).
As to Claim 18, the Claim is rejected for the same reasons as Claim 4.
As to Claim 19, the Claim is rejected for the same reasons as Claim 5.
As to Claim 20, besides Claim 1, Baron in view of Barzuza in further view of Seo teaches wherein the virtual background template and the media file are stored as a collective container (Baron (¶0025 line 10-24, ¶0028 line 1-2), background data includes both the background template (rule) and multiple virtual backgrounds).
Claim(s) 3, 7, 10, 12, 14 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Baron in view of Barzuza in further view of Seo, and further in view of Mochizuki (U.S. 2003/0001948, hereinafter Mochizuki).
As to Claim 3, besides Claim 1, Baron in view of Barzuza in further view of Seo may not explicitly disclose further comprising:
receiving the tag as a textual input directly into the virtual background boundary area via a user interface for configuring the virtual background template, wherein the textual tag is stored in association with the virtual background template in a data storage device prior to the conference.
Mochizuki teaches:
receiving the tag as a textual input directly into the virtual background boundary area via a user interface for configuring the virtual background template, wherein the textual tag is stored in association with the virtual background template in a data storage device prior to the conference (Mochizuki (¶0087 last 4 lines), "This kind of a music arrangement method and a combination of the foreground (dancing by animated persons) and the background (landscape) are specified in advance by a user's selection").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the background modification module of Baron in view of Barzuza in further view of Seo to instead be the background modifier taught by Mochizuki, with a reasonable expectation of success. The motivation would be to "provide a content distribution system that can offer a user not only a selection of a content work but also a wider selection of a content element" (Mochizuki (¶0012 line 3-5)).
As to Claim 7, besides Claim 1, Baron in view of Barzuza in further view of Seo may not explicitly disclose:
wherein the virtual background template includes a plurality of virtual background boundary areas including the virtual background boundary area and at least one additional virtual background boundary area, wherein each virtual background boundary area of the plurality of virtual background boundary areas has a respective textual tag directly input by the user into the respective virtual background boundary area via the user interface,
wherein determining that the contextual information of the conference includes the textual tag comprises:
determining that the contextual information includes at least one textual tag of the respective textual tags associated with the plurality of virtual background boundary areas;
wherein retrieving the media file comprises:
retrieving the plurality of media files, each media file corresponding to one of the plurality of virtual background boundary areas based on matching the respective textual tag; and
wherein displaying the media file comprises:
displaying each of the plurality of media files in a corresponding boundary area of the virtual background, wherein different media files are displayed in different boundary areas based on matching respective textual tags.
Mochizuki teaches:
wherein the virtual background template includes a plurality of virtual background boundary areas including the virtual background boundary area and at least one additional virtual background boundary area, wherein each virtual background boundary area of the plurality of virtual background boundary areas has a respective textual tag directly input by the user into the respective virtual background boundary area via the user interface (Mochizuki (¶0099 line 1-5, ¶0103 line 3-10, ¶0056 line 8-16, fig. 3, fig. 10), the user selects background video material, foreground video material, and sound material to include in the content; the user edits the screen layout by moving the virtual video material; a plurality of background video materials are selected in figure 3; figure 10 suggests that the video materials are displayed in rectangles of different sizes),
wherein determining that the contextual information of the conference includes the textual tag comprises:
determining that the contextual information includes at least one textual tag of the respective textual tags associated with the plurality of virtual background boundary areas (Mochizuki (¶0090, ¶0088 line 3-9, fig. 10), a plurality of background videos are selected for the screen content; figure 10 suggests that the video materials are displayed in rectangles of different sizes);
wherein retrieving the media file comprises:
retrieving the plurality of media files, each media file corresponding to one of the plurality of virtual background boundary areas based on matching the respective textual tag (Mochizuki (¶0090, ¶0088 line 3-9, fig. 10), a plurality of background videos are selected for the screen content; figure 10 suggests that the video materials are displayed in rectangles of different sizes); and
wherein displaying the media file comprises:
displaying each of the plurality of media files in a corresponding boundary area of the virtual background, wherein different media files are displayed in different boundary areas based on matching respective textual tags (Mochizuki (¶0090, ¶0088 line 3-9, fig. 10), a plurality of background videos are selected for the screen content; figure 10 suggests that the video materials are displayed in rectangles of different sizes).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the background modification module of Baron in view of Barzuza in further view of Seo to instead be the background modifier taught by Mochizuki, with a reasonable expectation of success. The motivation would be to "provide a content distribution system that can offer a user not only a selection of a content work but also a wider selection of a content element" (Mochizuki (¶0012 line 3-5)).
As to Claim 10, the Claim is rejected for the same reasons as Claim 3.
As to Claim 12, the Claim is rejected for the same reasons as Claim 7.
As to Claim 14, the Claim is rejected for the same reasons as Claim 7.
As to Claim 17, the Claim is rejected for the same reasons as Claim 3.
Response to Arguments
Double Patenting:
Applicant filed a Terminal Disclaimer; therefore, the Double Patenting rejection(s) are respectfully withdrawn.
Rejections under 35 U.S.C. §103:
As to Claim 1, Applicants argue that Baron does not disclose "a textual tag directly input by a user into the virtual background boundary area … " (last paragraph of page 8 of the remarks).
Applicants’ arguments are not persuasive because Seo teaches the limitation(s). See the current rejection(s) for details.
As to Claim 1, Applicants argue that Seo does not disclose "the keyword is saved for later comparison" (fourth paragraph of page 9 of the remarks).
Applicants’ arguments are not persuasive because the argument(s) is/are directed to a feature not recited in the Claims. Further amending the Claims to clarify these features might advance prosecution.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NHAT HUY T NGUYEN whose telephone number is (571)270-7333. The examiner can normally be reached M-F: 12:00-8:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached on 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NHAT HUY T NGUYEN/Primary Examiner, Art Unit 2147