DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claim 11 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 11-12 of co-pending Application No. 18/634,611 (reference application). This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Table 1. Mapping of the claims of the current application to the conflicting claims of the co-pending application raising the double patenting issue.

Current Application (18/634,606)        Co-pending Application (18/634,611)
11                                      11, 12
Table 2. Comparison of the conflicting claim language.

Current Application (18/634,606):
Claim 11. A non-transitory computer readable medium storing instructions operable to cause one or more processors to perform operations comprising:
determining, by generative artificial intelligence software evaluating content of a video conference, one or more key points related to the video conference;
updating, by the generative artificial intelligence software, a virtual background image of a participant of the video conference to include one or more visual elements representing the one or more key points; and
outputting the updated virtual background image for use within a video stream of the participant during the video conference.

Co-pending Application (18/634,611):
Claim 11. A non-transitory computer readable medium storing instructions operable to cause one or more processors to perform operations comprising:
obtaining, by generative artificial intelligence software, input associated with a video conference;
generating, by the generative artificial intelligence software, a virtual background image based on the input; and
outputting the virtual background image for use within multiple participant video streams during the video conference.
Claim 12. The non-transitory computer readable medium of claim 11, wherein the input corresponds to one or more key points related to the video conference and the virtual background image visually represents the one or more key points.
Claim 11 is rejected on the ground of nonstatutory (obviousness-type) double patenting over claims 11-12 of the co-pending application because it recites similar limitations, as shown in Table 2. Although the conflicting claims are not identical, they are not patentably distinct from each other because the scope of the inventions is the same. Claim 11 of the current application is anticipated by, or at least an obvious variant of, claims 11-12 of co-pending application 18/634,611.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6, 8, 10-17, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kare et al. (US 20240056551 A1).
Regarding Claim 1, Kare discloses A method (ABST reciting “Methods and systems”), comprising:
determining, by generative artificial intelligence software evaluating content of a video conference, one or more key points related to the video conference; (ABST reciting “A virtual background generator may monitor a user's calendar and/or inbox for meetings. The virtual background generator may analyze the context of calendar invites and/or scheduled meetings to generate one or more virtual backgrounds for a video conference.” Further, ¶37 recites “In step 310, the computing device may train a machine learning model to generate one or more backgrounds based on one or more criteria contained in a scheduled meeting and/or a meeting invite. . . The machine learning model may support a generative adversarial network, a bidirectional generative adversarial network, an adversarial autoencoder, or an equivalent thereof.” ¶41-47 disclosing determining one or more key points using natural language techniques, and ¶47 reciting “Natural language processing may be used to parse the body of the meeting invitation 410. Natural language processing may identify keywords contained in the body field 410, while disregarding nonce words.”)
updating, by the generative artificial intelligence software, a virtual background image of a participant of the video conference to include one or more visual elements representing the one or more key points; (¶48 reciting “In step 340, the computing device may input each of the word embeddings described above into the machine learning model. The machine learning model may analyze the one or more word embeddings, from the set of word embeddings, to generate one or more backgrounds”. Fig. 5 showing three updated virtual backgrounds 510, 520, and 530, including visual elements of a corporate logo, a title of the meeting, and/or an agenda for the meeting.) and
outputting the updated virtual background image for use within a video stream of the participant during the video conference. (Fig. 3, step 370: cause the virtual background to be displayed. ¶49 reciting “the computing device may cause a first background, of the one or more backgrounds, to be displayed during the video conference. . . cause each of the different selections to be displayed on the respective user devices during the video conference ”)
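Examiner's note: for clarity of the mapping above, the following is a minimal illustrative sketch, written as hypothetical Python, of the three recited steps of Claim 1 (determining key points, updating the virtual background image, and outputting it for the participant's video stream). The function names, data structures, and the trivial filter standing in for the generative artificial intelligence software are the examiner's own illustration; they do not appear in the claims or in Kare.

```python
# Illustrative sketch only: the three recited steps of Claim 1, rendered as
# hypothetical Python. No identifier here is drawn from Kare or the application.

from dataclasses import dataclass, field


@dataclass
class VirtualBackground:
    """Hypothetical container for the visual elements of a virtual background image."""
    visual_elements: list[str] = field(default_factory=list)


def determine_key_points(conference_content: list[str]) -> list[str]:
    # Step 1: evaluate content of the video conference (agenda items, screen
    # share text, spoken phrases) and determine key points. A trivial filter
    # stands in for the generative artificial intelligence software.
    return [item.strip() for item in conference_content if item.strip()]


def update_virtual_background(background: VirtualBackground,
                              key_points: list[str]) -> VirtualBackground:
    # Step 2: update the background to include one visual element per key point.
    background.visual_elements.extend(f"element: {point}" for point in key_points)
    return background


def output_for_video_stream(background: VirtualBackground) -> list[str]:
    # Step 3: output the updated background for use within the participant's
    # video stream (here, simply return the list of visual elements).
    return background.visual_elements


if __name__ == "__main__":
    content = ["Q3 roadmap", "Budget review", "Hiring plan"]
    key_points = determine_key_points(content)
    updated = update_virtual_background(VirtualBackground(), key_points)
    print(output_for_video_stream(updated))
```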
Regarding Claim 2. Kare discloses The method of claim 1, wherein the content includes screen share content shared to the video conference, and wherein determining the one or more key points related to the video conference comprises:
deriving the one or more key points from the screen share content.
(¶6 reciting “The method may include training a machine learning model to generate one or more backgrounds based on criteria contained in a scheduled meeting and/or a calendar invite. The criteria may include information included in the meeting invitation (e.g., attachment(s) to the meeting, agenda associated with the meeting, list of attendees, message body, etc.), as well as topics discussed during the meeting (e.g., words or phrases spoken by participants, gestures made by participants, messages sent in chat by participants, etc.)”)
Regarding Claim 3, Kare discloses The method of claim 1, wherein the content includes an agenda of the video conference, and wherein determining the one or more key points related to the video conference comprises:
deriving the one or more key points from the agenda.
(¶6 reciting “The method may include training a machine learning model to generate one or more backgrounds based on criteria contained in a scheduled meeting and/or a calendar invite. The criteria may include information included in the meeting invitation (e.g., attachment(s) to the meeting, agenda associated with the meeting”)
Regarding Claim 4. Kare discloses The method of claim 1, wherein determining the one or more key points related to the video conference comprises:
determining the one or more key points based on one or more of a body language, speech emphasis, or language usage of a speaker during the video conference.
(¶6 reciting “The method may include training a machine learning model to generate one or more backgrounds based on criteria contained in a scheduled meeting and/or a calendar invite. The criteria may include information included in the meeting invitation (e.g., attachment(s) to the meeting, agenda associated with the meeting, list of attendees, message body, etc.), as well as topics discussed during the meeting (e.g., words or phrases spoken by participants, gestures made by participants, messages sent in chat by participants, etc.)”)
Regarding Claim 5. Kare discloses The method of claim 1, wherein updating the virtual background image of the participant of the video conference to include the one or more visual elements representing the one or more key points comprises:
determining, by the generative artificial intelligence software, an arrangement of the one or more visual elements based on one or more of a substance of the one or more key points, an order in which the one or more key points are addressed within the video conference, or an inferred importance of the one or more key points.
(Fig. 5 showing the arrangement of the visual elements, e.g. a corporate logo, a title of the meeting, and/or an agenda for the meeting, based on one or more of a substance of the key points, an order in which the one or more key points are addressed within the video conference, or an inferred importance of the one or more key points.)
Regarding Claim 6. Kare discloses The method of claim 1, wherein updating the virtual background image of the participant of the video conference to include the one or more visual elements representing the one or more key points comprises:
emphasizing a visual element of the one or more visual elements based on one or more of an active discussion of a key point corresponding to the visual element or an inferred importance of the key point.
(¶57 reciting “Users may wish to have the background update during the meeting based on topics being discussed during the meeting. For instance, a user who has a presentation as their background may wish to have the presentation advance to the next slide by saying a phrase, such as “next slide,””)
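Examiner's note: as an illustration of the emphasis limitation mapped above for Claim 6, the following hypothetical Python sketch emphasizes the visual element whose key point is under active discussion. The names and the simple string match standing in for the active-discussion detection are the examiner's own assumptions, not language from Kare.

```python
# Illustrative sketch only: emphasizing the visual element corresponding to the
# key point currently being discussed; the matching logic is a stand-in for the
# inference the generative artificial intelligence software would perform.

from dataclasses import dataclass


@dataclass
class VisualElement:
    key_point: str
    emphasized: bool = False


def emphasize_active_element(elements: list[VisualElement],
                             active_topic: str) -> None:
    # Emphasize only the element whose key point matches the topic under
    # active discussion; de-emphasize the others.
    for element in elements:
        element.emphasized = (element.key_point.lower() == active_topic.lower())


if __name__ == "__main__":
    elements = [VisualElement("Q3 roadmap"), VisualElement("Budget review")]
    emphasize_active_element(elements, "budget review")
    print([(e.key_point, e.emphasized) for e in elements])
```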
Regarding Claim 8. Kare discloses The method of claim 1, wherein outputting the updated virtual background image for use with the video stream of the participant during the video conference comprises:
outputting the updated virtual background image for use with multiple video streams each corresponding to a different participant during the video conference. (Figs. 7A and 7B showing the updated virtual background image used with multiple video streams each corresponding to a different participant during the video conference. ¶58 and ¶64)
Regarding Claim 10. Kare discloses The method of claim 1, comprising:
determining, by the generative artificial intelligence software during the video conference, one or more previous key points related to a previous portion of the video conference or of a previous video conference;
further updating, by the generative artificial intelligence software, the virtual background image of the participant of the video conference to include one or more second visual elements representing the one or more previous key points; and
outputting the further updated virtual background image for use within the video stream of the participant during the video conference.
(Figs. 7A and 7B. ¶64 reciting “The example shown in FIG. 7B continues the example that started above with the description of FIG. 7A. As shown in FIG. 7B, the second background may include additional agenda items that have been discussed.”)
Regarding Claim 11. Kare discloses A non-transitory computer readable medium storing instructions operable to cause one or more processors to perform operations (¶65 reciting “One or more aspects discussed herein may be embodied in computer-usable or readable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices as described herein.”) comprising:
determining, by generative artificial intelligence software evaluating content of a video conference, one or more key points related to the video conference;
updating, by the generative artificial intelligence software, a virtual background image of a participant of the video conference to include one or more visual elements representing the one or more key points; and
outputting the updated virtual background image for use within a video stream of the participant during the video conference.
(See the rejection of Claim 1 above for detailed analysis.)
Regarding Claim 12. Kare discloses The non-transitory computer readable medium of claim 11, wherein determining the one or more key points related to the video conference comprises:
determining the one or more key points based on one or more of screen share content shared to the video conference, an agenda of the video conference, or activity of a speaker during the video conference.
(See the rejections of Claims 2-4 above for detailed analysis.)
Regarding Claim 13. Kare discloses The non-transitory computer readable medium of claim 11, wherein one or more of an arrangement or an emphasis of the one or more visual elements is based on one or more of a substance of the one or more key points, an order in which the one or more key points are addressed within the video conference, an active discussion of the one or more key points, or an inferred importance of the one or more key points.
(See the rejections of Claims 5-6 above for detailed analysis.)
Regarding Claim 14. Kare discloses The non-transitory computer readable medium of claim 11, wherein the visual background image is updated multiple times during the video conference to change the one or more visual elements.
(¶57 reciting “Users may wish to have the background update during the meeting based on topics being discussed during the meeting. For instance, a user who has a presentation as their background may wish to have the presentation advance to the next slide by saying a phrase, such as “next slide,””. Figs. 7A and 7B. ¶58, ¶64. )
Regarding Claim 15. Kare discloses A system (ABST reciting “Systems”), comprising:
a memory subsystem; and processing circuitry configured to execute instructions stored in the memory subsystem to: (¶65 reciting “One or more aspects discussed herein may be embodied in computer-usable or readable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices as described herein.”)
determine, by generative artificial intelligence software evaluating content of a video conference, one or more key points related to the video conference;
update, by the generative artificial intelligence software, a virtual background image of a participant of the video conference to include one or more visual elements representing the one or more key points; and
output the updated virtual background image for use within a video stream of the participant during the video conference.
(See the rejection of Claim 1 above for detailed analysis.)
Regarding Claim 16. Kare discloses The system of claim 15, wherein the one or more key points are derived based on one or more of screen share content shared to the video conference, an agenda of the video conference, or activity of a speaker during the video conference.
(See the rejections of Claims 2-4 above for detailed analysis.)
Regarding Claim 17. Kare discloses The system of claim 15, wherein the one or more visual elements are arranged within the virtual background image based on one or more of a substance of the one or more key points, an order in which the one or more key points are addressed within the video conference, or an inferred importance of the one or more key points.
(See the rejections of Claims 5-6 above for detailed analysis.)
Regarding Claim 20. Kare discloses The system of claim 15, wherein the virtual background image is further updated during the video conference according to input obtained by the generative artificial intelligence software.
(¶57 reciting “Users may wish to have the background update during the meeting based on topics being discussed during the meeting. For instance, a user who has a presentation as their background may wish to have the presentation advance to the next slide by saying a phrase, such as “next slide,” or performing a swiping gesture.”)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kare et al. (US 20240056551 A1) as applied to claims 1, 11, and 15 above, and further in view of Jorasch et al. (US 20230254412 A1).
Regarding Claim 7. Kare discloses The method of claim 1.
However, Kare does not explicitly disclose wherein updating the virtual background image of the participant of the video conference to include the one or more visual elements representing the one or more key points comprises:
removing one or more previously determined visual elements from the virtual background image based on one or more of an expiration of a time threshold or a meeting of a space threshold for the virtual background image.
Jorasch teaches “systems, methods, and apparatus for improving meetings” (¶4). More specifically, ¶289 recites “a user might show some appreciation for an insightful statement from caller image 4415d by dragging a star symbol into her grid location. This star might be visible only to caller 4415d, only to members of her functional group, or visible to all call participants. The star could remain for a fixed period of time (e.g. two minutes)”. In other words, Jorasch teaches a previously displayed visual element being removed based on a time threshold, e.g., two minutes.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Kare and Jorasch so as to remove one or more previously determined visual elements from the virtual background image based on an expiration of a time threshold. The suggestion/motivation would have been “for improving meetings” (Jorasch, ¶4), and to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.
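Examiner's note: the following hypothetical Python sketch illustrates the combined teaching relied upon for Claim 7, namely removing a previously added visual element from the virtual background once a time threshold expires (e.g., the two-minute period described in Jorasch ¶289). The class and method names and the 120-second default are illustrative assumptions, not language from Kare or Jorasch.

```python
# Illustrative sketch only: removing previously determined visual elements from
# a virtual background after a time threshold expires. Names and the default
# threshold are the examiner's hypothetical illustration.

import time
from dataclasses import dataclass, field


@dataclass
class TimedElement:
    label: str
    added_at: float  # seconds since the epoch when the element was added


@dataclass
class VirtualBackground:
    elements: list[TimedElement] = field(default_factory=list)

    def add(self, label: str) -> None:
        self.elements.append(TimedElement(label, time.time()))

    def expire(self, threshold_seconds: float = 120.0) -> None:
        # Drop any element displayed longer than the threshold (e.g., the
        # two-minute fixed period described in Jorasch ¶289).
        now = time.time()
        self.elements = [e for e in self.elements
                         if now - e.added_at < threshold_seconds]


if __name__ == "__main__":
    bg = VirtualBackground()
    bg.add("star: insightful statement")
    bg.expire(threshold_seconds=120.0)  # elements older than two minutes are removed
    print([e.label for e in bg.elements])
```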
Regarding Claim 18. Kare in view of Jorasch discloses The system of claim 15, wherein a visual element of the one or more visual elements is emphasized using one or more font modifiers. (Jorasch, ¶3004 reciting “The layout and appearance of slides, documents, and software could dynamically respond to eye gaze. For example, the central controller or software controller could rearrange the positioning of information, change the size of images, alter font attributes (type, size, color, emphasis) increase cursor size and manipulate other visual aspects of digital artifacts and user interfaces to place information in areas of high attention.” The suggestion/motivation would have been the same as that set forth in the rejection of Claim 7 above.)
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kare et al. (US 20240056551 A1) as applied to claims 1, 11, and 15 above, and further in view of Nelson et al. (US 20190273767 A1).
Regarding Claim 9. Kare discloses The method of claim 1.
However, Kare does not explicitly disclose comprising: outputting the one or more visual elements to a whiteboard or document associated with the video conference.
Nelson teaches “conducting electronic meetings over computer networks using interactive whiteboard appliances (IWBs)” (¶2). More specifically, ¶117 recites “Embodiments further include improvements to the presentation of content on interactive whiteboard appliances, providing meeting services for meeting attendees, agenda extraction”. In other words, Nelson teaches outputting the one or more visual elements (i.e. agenda extraction) to a whiteboard.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method (taught by Kare) to output the one or more visual elements (e.g., agenda extraction) to a whiteboard (taught by Nelson). The suggestion/motivation would have been to provide “an improvement in conducting electronic meetings over computer networks using Interactive Whiteboard (IWB) appliances” (¶6), and to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.
Regarding Claim 19. Kare in view of Nelson discloses The system of claim 15, wherein the one or more visual elements are output to a source external to the video conference. (Nelson teaches outputting the one or more visual elements (i.e., agenda extraction) to a whiteboard in ¶117. In addition, ¶6 teaches that the interactive whiteboard (IWB) appliances are external to the video conference, reciting “The meeting manager causes the translated text in the second language to be transmitted over the one or more computing devices to the IWB appliance, wherein the IWB appliance displays the translated text in the second language during the electronic meeting.” The suggestion/motivation would have been the same as that set forth in the rejection of Claim 9 above.)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YI WANG whose telephone number is (571)272-6022. The examiner can normally be reached 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan, can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YI WANG/Primary Examiner, Art Unit 2619