DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 7, 15, and 20 are objected to because of the following informalities:
In Claim 7, lines 3-4, "mentioned the virtual interaction" should read "mentioned in the virtual interaction".
In Claim 15, lines 5-6, "mentioned the virtual interaction" should read "mentioned in the virtual interaction".
In Claim 20, line 5, "mentioned the virtual interaction" should read "mentioned in the virtual interaction".
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981).
Claims 1-2, 4, 6, 8-12, 14, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Kumahara (“Kumahara”, US 20170200128, included in IDS filed 11/8/2024) in view of Lipendin et al. (“Lipendin”, US 20180247274).
Regarding Claim 1, Kumahara teaches a method comprising: accessing virtual interaction data associated with a virtual interaction (par 57; Fig. 1C, elements {104, 130}, par 99-101);
detecting a meeting intent for a future meeting based on the virtual interaction data (par 85; Fig. 1C, elements {104, 130}, par 99-101);
predicting multiple meeting attendees based on the virtual interaction data and account information associated with multiple participants in the virtual interaction (par 57; par 85; Fig. 1C, elements {104, 130}, par 99-101; The multiple attendees are the user and Joe Blake, who are automatically included in the event. The account information is Joe Blake’s name tied to his mobile account. The event may be automatically generated. The attendees are predicted because they are included in the event based on the conversation.);
determining one or more meeting times based on the virtual interaction data and online calendar data associated with the multiple meeting attendees (par 57; par 85; Fig. 1C, elements {104, 130}, par 99-101; Fig. 2B, elements {212b, 212e} par 108; The multiple attendees are the user and Joe Blake, who are automatically included in the event. The account information is Joe Blake’s name tied to his mobile account. The event may be automatically generated.);
generating a meeting title based on the virtual interaction data by using a generative artificial intelligence (Al) model (par 61-67; par 85; Fig. 1C, elements {104, 130}, par 99-101; The generative AI model is the event generation system which uses machine learning. The meeting title is the Description for the event.);
and providing a meeting schedule suggestion to a participant in the virtual interaction, the meeting schedule suggestion comprising the meeting title and the one or more meeting times (par 85; Fig. 1C, elements {104, 130}, par 99-101; The multiple attendees are the user and Joe Blake, who are automatically included in the event. The account information is Joe Blake’s name tied to his mobile account.).
Kumahara does not explicitly teach the meeting schedule suggestion comprising identifications of the multiple attendees.
Lipendin teaches the meeting schedule suggestion comprising identifications of the multiple attendees (Fig. 5, element 518, par 60-61).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kumahara with the list of meetings screen of Lipendin because it allows for users to view upcoming meetings as well as view which participants are invited to those meetings before they occur.
Regarding Claim 2, Kumahara and Lipendin teach the method of claim 1.
Kumahara further teaches wherein the virtual interaction is an online chat session (par 54; par 57),
and wherein the virtual interaction data comprises real-time chat messages in the online chat session (par 54; par 57).
Regarding Claim 4, Kumahara and Lipendin teach the method of claim 1.
Kumahara further teaches wherein the virtual interaction is an email thread, and wherein the virtual interaction data comprises a sequence of emails (par 54; par 57; par 157).
Regarding Claim 6, Kumahara and Lipendin teach the method of claim 1.
Kumahara teaches further comprising: determining identifications of one or more users mentioned in the virtual interaction (Fig. 3C, element 320, par 116).
Regarding Claim 8, Kumahara and Lipendin teach the method of claim 1.
Kumahara teaches further comprising determining the participant receiving the meeting schedule suggestion based on the virtual interaction data in the virtual interaction (par 57; par 85; Fig. 1C, elements {104, 130}, par 99-101).
Regarding Claim 9, Kumahara and Lipendin teach the method of claim 1.
Kumahara further teaches wherein the meeting schedule suggestion is provided in an interactive graphical user interface (GUI) element presenting the meeting title, and the one or more meeting times (par 57; par 85; Fig. 1C, elements {104, 130}, par 99-101; par 147),
wherein the interactive GUI element is linked to a meeting application (par 9; par 57; par 85; Fig. 1C, elements {104, 130}, par 99-101; par 147),
wherein the meeting application is configured to provide an interactive scheduling window in response to the interactive GUI element being activated (par 9; par 57; par 85; Fig. 1C, elements {104, 130}, par 99-101; Fig. 2B, element 210, par 108; par 147; The interactive scheduling window is the window to modify event details.).
Kumahara does not explicitly teach wherein the meeting schedule suggestion is provided in an interactive graphical user interface (GUI) element presenting the identifications of the multiple attendees.
Lipendin teaches wherein the meeting schedule suggestion is provided in an interactive graphical user interface (GUI) element presenting the identifications of the multiple attendees (Fig. 5, element 518, par 60-61).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kumahara with the list of meetings screen of Lipendin because it allows for users to view upcoming meetings as well as view which participants are invited to those meetings before they occur.
Regarding Claim 10, Kumahara and Lipendin teach the method of claim 9.
Kumahara further teaches wherein the scheduling window is automatically filled with the meeting title, the identifications of the attendee, and a meeting time of the one or more meeting times (par 9; par 57; par 85; Fig. 1C, elements {104, 130}, par 99-101; Fig. 2B, element 210, par 108; par 147; The interactive scheduling window is the window to modify event details.).
Kumahara does not explicitly teach multiple attendees.
Lipendin teaches multiple attendees (Fig. 5, element 518, par 60-61).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kumahara with the list of meetings screen of Lipendin because it allows for users to view upcoming meetings as well as view which participants are invited to those meetings before they occur.
Regarding Claim 11, Kumahara and Lipendin teach the method of claim 9.
Kumahara further teaches wherein the interactive scheduling window is configured to receive a user input and update the meeting title, the identifications of the attendee, or the meeting time based on the user input (par 9; par 57; par 85; Fig. 1C, elements {104, 130}, par 99-101; Fig. 2B, element 210, par 108; par 147; The interactive scheduling window is the window to modify event details.).
Kumahara does not explicitly teach multiple attendees.
Lipendin teaches multiple attendees (Fig. 5, element 518, par 60-61).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kumahara with the list of meetings screen of Lipendin because it allows for users to view upcoming meetings as well as view which participants are invited to those meetings before they occur.
Regarding Claim 12, Kumahara teaches a system comprising: a communications interface; a non-transitory computer-readable medium; and one or more processors communicatively coupled to the communications interface and the non-transitory computer-readable medium, the one or more processors configured to execute processor-executable instructions stored in the non-transitory computer-readable medium (par 222-225).
The remainder of Claim 12 is rejected with the same reasoning as Claim 1.
Regarding Claim 14, Claim 14 is rejected with the same reasoning as Claim 6.
Regarding Claim 16, Claim 16 is rejected with the same reasoning as Claim 8.
Regarding Claim 17, Kumahara and Lipendin teach the system of claim 12.
Kumahara teaches wherein the meeting schedule suggestion is provided in an interactive graphical user interface (GUI) element presenting the meeting title and the one or more meeting times (par 57; par 85; Fig. 1C, elements {104, 130}, par 99-101; par 147),
wherein the interactive GUI element is linked to a meeting application (par 9; par 57; par 85; Fig. 1C, elements {104, 130}, par 99-101; par 147),
wherein the meeting application is configured to provide an interactive scheduling window in response to the interactive GUI element being activated (par 9; par 57; par 85; Fig. 1C, elements {104, 130}, par 99-101; Fig. 2B, element 210, par 108; par 147; The interactive scheduling window is the window to modify event details.),
wherein the scheduling window is automatically filled with the meeting title, the identifications of the attendee, and a meeting time of the one or more meeting times (par 9; par 57; par 85; Fig. 1C, elements {104, 130}, par 99-101; Fig. 2B, element 210, par 108; par 147; The interactive scheduling window is the window to modify event details.),
and wherein the interactive scheduling window is configured to receive a user input and update the meeting title, the identifications of the attendee, or the meeting time based on the user input (par 9; par 57; par 85; Fig. 1C, elements {104, 130}, par 99-101; Fig. 2B, element 210, par 108; par 147; The interactive scheduling window is the window to modify event details.).
Kumahara does not explicitly teach wherein the meeting schedule suggestion is provided in an interactive graphical user interface (GUI) element presenting the identifications of the multiple attendees; and multiple attendees.
Lipendin teaches wherein the meeting schedule suggestion is provided in an interactive graphical user interface (GUI) element presenting the identifications of the multiple attendees (Fig. 5, element 518, par 60-61);
multiple attendees (Fig. 5, element 518, par 60-61).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kumahara with the list of meetings screen of Lipendin because it allows for users to view upcoming meetings as well as view which participants are invited to those meetings before they occur.
Regarding Claim 18, Claim 18 is rejected with the same reasoning as Claim 1.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Kumahara and Lipendin in view of Quinn et al. (“Quinn”, US 20110267419).
Regarding Claim 3, Kumahara and Lipendin teach the method of claim 1.
Kumahara and Lipendin do not explicitly teach wherein the virtual interaction is a virtual conference, and wherein the virtual interaction data comprises a live transcript for the virtual conference.
Quinn teaches wherein the virtual interaction is a virtual conference, and wherein the virtual interaction data comprises a live transcript for the virtual conference (par 9; par 82).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kumahara and Lipendin with the live conferencing functionality of Quinn because it allows for users to look at and talk to each other in real-time without being in the same location.
Claims 5, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kumahara and Lipendin in view of Fahrendorff et al. (“Fahrendorff”, US 20210056860) and in further view of Tadesse et al. (“Tadesse”, US 11558440).
Regarding Claim 5, Kumahara and Lipendin teach the method of claim 1.
Kumahara teaches further comprising: enabling a client device associated with the virtual interaction to determine a meeting intent (par 57; par 85; Fig. 1C, elements {104, 130}, par 99-101; par 147).
Kumahara and Lipendin do not explicitly teach enabling a device to determine a meeting intent by comparing the virtual interaction data to a set of pre-determined keywords; and verifying the meeting intent based on the virtual interaction data using a natural language processing (NLP) model.
Fahrendorff teaches enabling a device to determine a meeting intent by comparing the virtual interaction data to a set of pre-determined keywords (par 51; The meeting intent is the topic.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kumahara and Lipendin with the keyword-based topic determination of Fahrendorff because it allows the topic of a conversation to be identified automatically from pre-determined keywords.
Kumahara, Lipendin, and Fahrendorff do not explicitly teach verifying the meeting intent based on the virtual interaction data using a natural language processing (NLP) model.
Tadesse teaches verifying the meeting intent based on the virtual interaction data using a natural language processing (NLP) model (Col. 8 lines 3-31; It is checked whether the topics discussed are relevant to the audio content. Therefore, the topics (the meeting intent) are verified.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kumahara, Lipendin, and Fahrendorff with the content recording of Tadesse because it provides a way to revisit previous meetings that occurred in the past.
Regarding Claim 13, Claim 13 is rejected with the same reasoning as Claim 5.
Regarding Claim 19, Kumahara and Lipendin teach the non-transitory computer-readable medium of claim 18.
Kumahara and Lipendin do not explicitly teach further comprising processor-executable instructions configured to cause one or more processors to: determine a meeting intent by comparing the virtual interaction data to a set of pre-determined keywords; and verify the meeting intent based on the virtual interaction data using a natural language processing (NLP) model.
Fahrendorff teaches further comprising processor-executable instructions configured to cause one or more processors to: determine a meeting intent by comparing the virtual interaction data to a set of pre-determined keywords (par 51; The meeting intent is the topic.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kumahara and Lipendin with the keyword-based topic determination of Fahrendorff because it allows the topic of a conversation to be identified automatically from pre-determined keywords.
Kumahara, Lipendin, and Fahrendorff do not explicitly teach verify the meeting intent based on the virtual interaction data using a natural language processing (NLP) model.
Tadesse teaches verify the meeting intent based on the virtual interaction data using a natural language processing (NLP) model (Col. 8 lines 3-31; It is checked whether the topics discussed are relevant to the audio content. Therefore, the topics (the meeting intent) are verified.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kumahara, Lipendin, and Fahrendorff with the content recording of Tadesse because it provides a way to revisit previous meetings that occurred in the past.
Claims 7, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kumahara and Lipendin in view of Polish et al. (“Polish”, US 20220014571).
Regarding Claim 7, Kumahara and Lipendin teach the method of claim 6.
Kumahara and Lipendin do not explicitly teach further comprising: determining a social graph based on historical virtual interaction data and profile data associated with the multiple participants and the one or more users mentioned the virtual interaction;
and predicting the multiple attendees based on the social graph.
Polish teaches further comprising: determining a social graph based on historical virtual interaction data and profile data associated with the multiple participants and the one or more users mentioned the virtual interaction (par 101; The social graph is the meeting graph.);
and predicting the multiple attendees based on the social graph (par 101; The social graph is the meeting graph.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kumahara and Lipendin with the meeting scores of Polish because it allows for predicting participant behavior for subsequent meetings (Polish; par 101), thereby improving quality of future meetings.
Regarding Claim 15, Claim 15 is rejected with the same reasoning as Claim 7.
Regarding Claim 20, Kumahara and Lipendin teach the non-transitory computer-readable medium of claim 18.
Kumahara and Lipendin do not explicitly teach further comprising processor-executable instructions configured to cause one or more processors to: determine a social graph based on historical virtual interaction data and profile data associated with the multiple participants in the virtual interaction and one or more users mentioned the virtual interaction; and predict the multiple attendees based on the social graph.
Polish teaches further comprising processor-executable instructions configured to cause one or more processors to: determine a social graph based on historical virtual interaction data and profile data associated with the multiple participants in the virtual interaction and one or more users mentioned the virtual interaction (par 101; The social graph is the meeting graph.);
and predict the multiple attendees based on the social graph (par 101; The social graph is the meeting graph.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kumahara and Lipendin with the meeting scores of Polish because it allows for predicting participant behavior for subsequent meetings (Polish; par 101), thereby improving quality of future meetings.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Niess et al. (US 20220368660), Abstract - Asynchronous collaboration via a communication platform is described. A message representative of an asynchronous meeting can be displayed via a user interface of a communication platform. The user interface can include an affordance to enable a user to add a snippet of content to the asynchronous meeting. In response to detecting an actuation of the affordance, an input user interface that includes an option to record or upload the snippet of content can be displayed. The snippet of content can be received from a client of a user associated with the asynchronous meeting and can be associated with other snippet(s) of content added by other user(s) associated with the asynchronous meeting. A preview summary of snippet(s) of content associated with the asynchronous meeting can be displayed in association with the message, wherein each snippet of content is viewable via a thread associated with the message.
Yasui (US 20230237440), Abstract - An information processing apparatus includes a processor configured to: receive an instruction to produce a meeting notice; acquire a schedule of a participant of a meeting, the schedule arranged within a meeting time of the meeting; if the participant is occupied during the meeting time, acquire log information on a conversation that the participant has made using a messenger application; and by analyzing the log information on the conversation, display information indicating whether holding of the meeting is possible during the meeting time.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAQIUL AMIN CHOUDHURY whose telephone number is (571)272-2482. The examiner can normally be reached Monday-Friday 7:30 AM - 5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Follansbee can be reached at 571-272-3964. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RAQIUL A CHOUDHURY/Examiner, Art Unit 2444