DETAILED ACTION
Status of the Claims
The following is a non-final Office Action in response to claims filed 10 April 2025.
Claims 1-20 are pending.
Claims 1-20 have been examined.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1, 3-4, 9-10, 12-13, 15, and 19-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 5-7, and 16 of copending Application No. 17/740,022 (now Patent No. 12,277,609). Although the claims at issue are not identical, they are not patentably distinct as shown below:
Claim 1 of instant Application No. 19/175,600 (first below) compared with claim 1 of copending Application No. 17/740,022 (second below):
1. One or more non-transitory computer-readable media storing computer-executable instructions that, when executed by a processor of an event hosting system, perform a method for matchmaking for a virtual event, the method comprising: matching, by the event hosting system, a first event attendee to a second event attendee in a networking pool to obtain a match, wherein the first event attendee is matched with the second event attendee based in part on a first set of interest data associated with the first event attendee and a second set of interest data associated with the second event attendee; presenting a graphical user interface (GUI) comprising a graphical element for the virtual event that is available to the first event attendee based on the match, wherein the graphical element presents a discussion topic for the virtual event, wherein the discussion topic is generated by the event hosting system from the first set of interest data and the second set of interest data based on the match; and initiating the virtual event upon selection of the graphical element, wherein initiating the virtual event comprises: dynamically modifying the GUI by presenting one or more of an audio feed and a video feed from a device of the second event attendee.
1. One or more non-transitory computer-readable media storing computer-executable instructions that, when executed by a processor of an event hosting system, perform a method for matchmaking for a virtual event, the method comprising: adding a first event attendee of a plurality of event attendees to a networking pool that is managed by the event hosting system; responsive to the networking pool reaching a capacity, matching, by the event hosting system, the first event attendee to a second event attendee in the networking pool to obtain a first match and one or more other event attendees from the plurality of event attendees to obtain at least a second match, wherein the first event attendee is matched with the second event attendee based in part on a first set of interest data associated with the first event attendee and a second set of interest data associated with the second event attendee; generating a first discussion topic for a first virtual event that is accessible by a first set of event attendees included as part of the first match, and a different second discussion topic for a second virtual event that is accessible by a second set of event attendees included as part of the second match, wherein the first discussion topic is generated from a first set of matched data corresponding to the matching of the first set of interest data and the second set of interest data and the second discussion topic is generated from a different second set of matched data on which the second match is made; presenting a graphical user interface (GUI) comprising a first graphical element for the first virtual event and a second graphical element for the second virtual event that are available to the first event attendee based on the matching of at least the first match and the second match, wherein the first graphical element presents the first discussion topic for the first virtual event, the first set of event attendees that have access to the first virtual event, and an 
identifier for any of the first set of event attendees that are actively participating in the first virtual event, and wherein the second graphical element presents the different second discussion topic for the second virtual event, the second set of event attendees that have access to the second virtual event, and an identifier for any of the second set of event attendees that are actively participating in the second virtual event; initiating the virtual event based on the matching of the first event attendee to the second event attendee and a selection of the first graphical element; monitoring the virtual event by analyzing interactions between the first event attendee and the second event attendee in video streams of the first event attendee and the second event attendee while the virtual event is active during a part of a first time limit; adjusting the first time limit set for the virtual event to a second time limit in response to the interactions between the first event attendee and the second event attendee being indicative of a successful or unsuccessful matching of the first event attendee to the second event attendee, wherein the second time limit is different than the first time limit; and responsive to the networking event reaching the second time limit, adding the first event attendee and the second event attendee back to the networking pool by modifying the GUI to terminate the virtual event and to present the networking pool with other virtual events.
The independent claims 1 and 16 of copending Application No. 17/740,022 (now Patent No. 12,277,609, hereinafter the ’609 Patent) are not identical to instant claims 1, 10, and 19, but they claim the same inventive concept of matchmaking between attendees in networking pools based upon interests and generating topics for discussion (the instant claims are much broader). Here, specifically, instant claim 1 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of the ’609 Patent. The claims differ in that instant claim 1 recites the matching of users and the presentation of generated graphical user interfaces for the virtual event to the users, whereas claim 1 of the ’609 Patent recites a second or additional discussion topic and second virtual event, as well as the ability to adjust time limits for the event (as highlighted in the table above). The portion of the specification in the ’609 Patent that supports the recited second or additional discussion topic and second virtual event, as well as the ability to adjust time limits for the event, includes an embodiment that would anticipate instant claim 1 herein. Instant claim 1 cannot be considered patentably distinct over claim 1 of the ’609 Patent when there is a specifically disclosed embodiment that supports claim 1 of that patent and falls within the scope of claim 1 herein, because it would have been obvious to one having ordinary skill in the art to modify the method of claim 1 by selecting a specifically disclosed embodiment that supports that claim, i.e., the additional steps which include the second or additional discussion topics, events, and the ability to adjust the time limits for the events. One having ordinary skill in the art would have been motivated to do this because that embodiment is disclosed as being a preferred embodiment supporting claim 1.
Instant independent claims 10 and 19 are rejected under the same rationale, mutatis mutandis.
Dependent claim 2 of the ’609 Patent recites substantially similar subject matter as instant claims 4 and 13.
Dependent claim 5 of the ’609 Patent recites substantially similar subject matter as instant claims 6 and 15.
Dependent claim 6 of the ’609 Patent recites substantially similar subject matter as instant claims 9 and 18.
Dependent claim 7 of the ’609 Patent recites substantially similar subject matter as instant claims 3 and 12.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims are directed to a process (an act, or series of acts or steps), a machine (a concrete thing, consisting of parts, or of certain devices and combination of devices), and a manufacture (an article produced from raw or prepared materials by giving these materials new forms, qualities, properties, or combinations, whether by hand labor or by machinery). Thus, each of the claims falls within one of the four statutory categories (Step 1). The claims recite an apparatus, a method (process), and a system with apparatuses; however, the claims recite determining a match between users based upon interests and a networking pool, which is an abstract idea of organizing human activities.
The limitations of “matching... a first event attendee to a second event attendee in a networking pool to obtain a match, wherein the first event attendee is matched with the second event attendee based in part on a first set of interest data associated with the first event attendee and a second set of interest data associated with the second event attendee....wherein the discussion topic is generated by the event hosting system from the first set of interest data and the second set of interest data based on the match,” as drafted, describe a process that, under its broadest reasonable interpretation, covers certain methods of organizing human activities: fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), but for the recitation of generic computer components (Step 2A, Prong One). That is, other than reciting “when executed by a processor...by the event hosting system” (or “when executed by the processor, performs a method for matchmaking for the virtual event..., by the event hosting system” in claim 19), nothing in the claim elements precludes the steps from the methods of organizing human activities grouping. For example, but for the “by the event hosting system” (or “when executed by the processor, performs a method for matchmaking for the virtual event..., by the event hosting system” in claim 19) language, “matching” and “wherein the discussion topic is generated” in the context of this claim encompass a user manually matching people based upon their interests and topics, which is managing personal behavior.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as one of the methods of organizing human activities but for the recitation of generic computer components, then it falls within the “Certain Methods of Organizing Human Activities” grouping of abstract ideas. Accordingly, the claim(s) recite(s) an abstract idea (Step 2A, Prong One: YES).
This judicial exception is not integrated into a practical application (Step 2A, Prong Two). The “presenting a graphical user interface (GUI)...initiating the virtual event comprising dynamically modifying the GUI...” limitations are simply insignificant post-solution output. Next, the claims only recite one additional element: using a processor of an event hosting system to perform the steps. The processor in both steps is recited at a high level of generality (i.e., as a generic processor performing the generic computer functions of matching attendees and generating discussion topics) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Specifically, the claims amount to nothing more than an instruction to apply the abstract idea using a generic computer, or invoking computers as tools by adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.04(d)(I) discussing MPEP 2106.05(f). The claims’ recitation of the “virtual event,” “event hosting system,” and “graphical user interface” only generally links the use of the judicial exception to a particular technological environment or field of use - see MPEP 2106.04(d)(I) discussing MPEP 2106.05(h). Accordingly, the combination of these additional elements does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea, even when considered as a whole (Step 2A, Prong Two: NO).
The claim does not include a combination of additional elements that are sufficient to amount to significantly more than the judicial exception (Step 2B). As discussed above with respect to integration of the abstract idea into a practical application (Step 2A, Prong Two), the combination of additional elements of using a processor of an event hosting system to perform the steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Reevaluating here in Step 2B, the “presenting a graphical user interface (GUI)...initiating the virtual event comprising dynamically modifying the GUI...” steps, which are insignificant post-solution activities, are also determined to be well-understood, routine, and conventional activity in the field. The Symantec, TLI, and OIP Techs court decisions discussed in MPEP 2106.05(d)(II) indicate that the mere receipt or transmission of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). Therefore, when considering the additional elements alone and in combination, there is no inventive concept in the claim. As such, the claims are not patent eligible, even when considered as a whole (Step 2B: NO).
Claims 2-7, 11-16, and 20 recite additional limitations with further steps that are still directed towards the abstract idea previously identified (interests, obtaining/monitoring behavior of users, rules for matching) and are not an inventive concept that meaningfully limits the abstract idea. Again, as discussed with respect to claims 1, 10, and 19, these limitations are no more than mere instructions to apply the exception using a computer or with computing components. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Even when considered as a whole, the claims do not integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Claims 8-9 and 17-18 recite additional limitations with further processing steps (natural language processing, generation of a transcript) for the attendees’ interactions, which are not an inventive concept that meaningfully limits the abstract idea. Again, as discussed with respect to claims 1, 10, and 19, these limitations are no more than mere instructions to apply the exception using a computer or with computing components. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Even when considered as a whole, the claims do not integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Claims 1-20 are therefore not eligible subject matter, even when considered as a whole.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Peters (US PG Pub. 2021/0076002) in view of Govindaraman (US PG Pub. 2014/0081882).
As per claims 1, 10, and 19, Peters discloses one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by a processor of an event hosting system, perform a method for matchmaking for a virtual event, a method and an event hosting system for matchmaking for a virtual event, the event hosting system comprising: a processor; a memory storing computer-executable instructions that, when executed by the processor, performs a method for matchmaking for the virtual event, the method comprising (memory, processor, computer code, network devices, Peters ¶102-¶106; For example, the server system may be a server of a video conferencing platform (e.g., ZOOM, SKYPE, MICROSOFT TEAMS, GOOGLE HANGOUTS MEET, CISCO WEBEX, etc.), ¶168):
presenting a graphical user interface (GUI) comprising a graphical element for the virtual event that is available to the first event attendee based on the match, wherein the graphical element presents a discussion topic for the virtual event, wherein the discussion topic is generated by the event hosting system from the first set of interest data and the second set of interest data based on the match (topics, Peters ¶157; conversation management hints, ¶234; move to another topic, ¶274; notifying users of hot button topics, ¶320; changing topics, shifting discussion, ¶356); and
initiating the virtual event upon selection of the graphical element, wherein initiating the virtual event comprises (video conferencing, live visual interaction, Peters ¶7-¶8):
dynamically modifying the GUI by presenting one or more of an audio feed and a video feed from a device of the second event attendee (video conferencing, live visual interaction, Peters ¶7-¶8; The moderator module 20 can also store and access mapping data 160 that indicates video conference management actions to be performed, either directly by the moderator module 20 or suggested for a user (e.g., a meeting organizer) to perform. For example, the mapping data 160 can indicate classifications and corresponding actions that the moderator module 20 can take to improve the video conference session when the corresponding classification is present. The actions may affect the current endpoint and the corresponding participant, ¶78; In some implementations, the system can change the amount of time allotted to speakers, or adjust the total meeting time (e.g., when to end the meeting or whether to extend the meeting) based on an algorithm to optimize a particular metric or as triggered by events or conditions detected during the communication session. For example, to allot speaking time to individuals, the system can assess the effects that speaking by an individual has on the engagement and emotion of other people. The system provides dynamic feedback, both showing how a person's actions (e.g., speech in a conference) affect others on the video conference, and showing the speaker how they are affecting others. For example, if one person speaks and engagement scores of others go up (or if positive emotion increases and/or negative emotion decreases), the system can extend the time allocated to that person. If a person speaks and engagement scores go down (or if positive emotion decreases and/or negative emotion increases), the system can decrease the speaking time allocation for that person. The system can also adjust the total meeting time. 
The system can assess the overall mood and collaboration scores of the participants to cut short meetings with low overall collaboration or to extend meetings that have high collaboration. As a result, the system can end some meetings early or extend others based on how engaged the participants are, ¶141; emotional or cognitive states of the participants, ¶274; For example, the server system may be a server of a video conferencing platform (e.g., ZOOM, SKYPE, MICROSOFT TEAMS, GOOGLE HANGOUTS MEET, CISCO WEBEX, etc.), ¶168).
While Peters discloses the ability to provide matches (Peters ¶131 and ¶268), Peters does not expressly disclose matching, by the event hosting system, a first event attendee to a second event attendee in a networking pool to obtain a match, wherein the first event attendee is matched with the second event attendee based in part on a first set of interest data associated with the first event attendee and a second set of interest data associated with the second event attendee.
However, Govindaraman teaches matching, by the event hosting system, a first event attendee to a second event attendee in a networking pool to obtain a match, wherein the first event attendee is matched with the second event attendee based in part on a first set of interest data associated with the first event attendee and a second set of interest data associated with the second event attendee (While actions described in FIG. 6 apply to the entire mega attendance event 108, the following actions apply particularly to selected sessions of the mega attendance event 108. Actions 710 and 730 of FIG. 7 most strongly differ from respective actions 610 and 630 of FIG. 6 because actions 710 and 730 include receiving "session-based user registration" and "session-based attendee list" instead of event-based user registration and event-based attendee list described in FIG. 6. Using this implementation, attendees can find matches among a smaller pool of session attendees, as opposed to a much larger pool of all the attendees of the mega attendance event 108. For the forgoing reason, FIG. 7 does not include action 690 described in FIG. 6, which includes "reporting matched attendees that are proximate to the verified user," as session attendees are often already in close proximity to each other. In other implementations, action 690 can be included in the method described in FIG. 7, Govindaraman ¶94; attributes, interests, skills, used for matches, ¶113-¶114).
The Peters and Govindaraman references are analogous in that both are directed towards organizing events. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Govindaraman’s ability to match users in and from pools of users in Peters’ system to improve the system and method, with a reasonable expectation that this would result in an event management system that is able to organize meetings and events.
The motivation is that an opportunity arises to help event attendees connect with people of interest present at a mega attendance event by taking into account social graphs and introduction preferences of the event attendees. Improved user experience and engagement and higher user satisfaction and retention may result (Govindaraman ¶5).
As per claims 2, 11, and 20, Peters and Govindaraman disclose as shown above with respect to claims 1, 10, and 19. Peters further discloses wherein the first set of interest data comprises at least one interest tag received from the first event attendee, and wherein the second set of interest data comprises at least one interest tag received from the second event attendee (preferences, interest of user, Peters ¶310).
As per claims 3 and 12, Peters and Govindaraman disclose as shown above with respect to claims 1 and 10. Govindaraman further teaches wherein the method further comprises: determining the discussion topic based in part on at least one shared interest between the first event attendee and the second event attendee (Social graph 105 can include online social networks of attendees on various social networking platforms like Chatter, Facebook, Twitter, LinkedIn, etc. In some implementations, social graph 105 can include records of other users in attendees' online social networks and further specify the relation and interaction types between the attendees and other users. In some implementations, social graph 105 can stratify, classify, categorize, and/or group other users in attendees' online social networks into "social graph tags" based on the preferences or interests of the attendees. The social graph tags can identify group of users in an attendee's online social network that have similar characteristics or attributes. Examples of such social graph tags or groups can include, without limitations, industry types, geographic territories, job functions, skills, expertise, products, services, age, gender, professional circles, degrees of separation, interaction strengths, social proximities, and location proximities, Govindaraman ¶28).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Govindaraman’s ability to match users in and from pools of users in Peters’ system to improve the system and method, with a reasonable expectation that this would result in an event management system that is able to organize meetings and events.
The motivation is that an opportunity arises to help event attendees connect with people of interest present at a mega attendance event by taking into account social graphs and introduction preferences of the event attendees. Improved user experience and engagement and higher user satisfaction and retention may result (Govindaraman ¶5).
As per claims 4 and 13, Peters and Govindaraman disclose as shown above with respect to claims 1 and 10. Govindaraman further teaches obtaining a first set of behavioral data for the first event attendee and a second set of behavioral data for the second event attendee, wherein the first set of behavioral data and the second set of behavioral data comprises at least one of: a visited area associated with the virtual event, a length of time spent at the visited area, or an event agenda, and wherein the first event attendee and the second event attendee are further matched based in part on the first set of behavioral data and the second set of behavioral data (attendee location data, Govindaraman ¶37-¶38; sessions attended, check-in, ¶83; proximity, ¶94; see also An attendee list 106 for the mega attendance event 108 is accessed via communication network(s) 107 at action 520 by attendees of the mega attendance event 108. Attendee list 106 can include information related to the attendees of the mega attendance event 108. In one implementation, attendee-related information can include background information of the attendees, pictures of the attendees, biographic information of the attendees such as industries in which the attendees work in, geographic territories within which the attendees are professionally active, job functions of the attendees, and service providers of the attendees, contact information of the attendees, digital business cards, information of products or services offered or consumed by the attendees, advertising materials, technical specifications, written work product of the attendees, etc. This information can be written textual information, video information, digital pictures, audio information or other types of information stored in digital form, ¶75).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Govindaraman’s ability to match users in and from pools of users in Peters’ system to improve the system and method, with a reasonable expectation that this would result in an event management system that is able to organize meetings and events.
The motivation is that an opportunity arises to help event attendees connect with people of interest present at a mega attendance event by taking into account social graphs and introduction preferences of the event attendees. Improved user experience and engagement and higher user satisfaction and retention may result (Govindaraman ¶5).
As per claims 5 and 14, Peters and Govindaraman disclose as shown above with respect to claims 1 and 10. Govindaraman further teaches wherein the method further comprises: adding the first event attendee and the second event attendee back to the networking pool upon ending the virtual event; and responsive to adding the first event attendee and the second event attendee back to the networking pool, matching the first event attendee to a third event attendee and the second event attendee to a fourth event attendee (While actions described in FIG. 6 apply to the entire mega attendance event 108, the following actions apply particularly to selected sessions of the mega attendance event 108. Actions 710 and 730 of FIG. 7 most strongly differ from respective actions 610 and 630 of FIG. 6 because actions 710 and 730 include receiving "session-based user registration" and "session-based attendee list" instead of event-based user registration and event-based attendee list described in FIG. 6. Using this implementation, attendees can find matches among a smaller pool of session attendees, as opposed to a much larger pool of all the attendees of the mega attendance event 108. For the forgoing reason, FIG. 7 does not include action 690 described in FIG. 6, which includes "reporting matched attendees that are proximate to the verified user," as session attendees are often already in close proximity to each other. In other implementations, action 690 can be included in the method described in FIG. 7, Govindaraman ¶94).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate Govindaraman’s ability to match users in and from pools of users into Peters’ system, with a reasonable expectation that this would result in an event management system able to organize meetings and events.
The motivation is that an opportunity arises to help event attendees connect with people of interest present at a mega attendance event by taking into account the social graphs and introduction preferences of the event attendees. Improved user experience and engagement, as well as higher user satisfaction and retention, may result (Govindaraman ¶5).
As per claims 6 and 15, Peters and Govindaraman disclose the limitations as shown above with respect to claims 1 and 10. Peters further discloses wherein the method further comprises: receiving a rule for matching a plurality of event attendees, wherein the first event attendee and the second event attendee are further matched based in part on the rule (The system can maintain profiles that represent different complex emotions or mental states, where each profile indicates a corresponding combination of emotion scores and potentially a pattern in which the scores change are maintained over time. The system compares the series of emotion data (e.g., a time series of emotion score vectors, occurrence or sequence of micro-expressions detected, etc.) with the profiles to determine whether and to what degree each person matches the profile. The system can then provide output to the members of a video conference or other communication session based on the results. For example, a person in a video conference may be provided a user interface that includes indicators showing the emotional states or engagement (e.g., collaboration score, participation score, etc.) of one or more of the other participants, Peters ¶119).
As per claims 7 and 16, Peters and Govindaraman disclose the limitations as shown above with respect to claims 1 and 10. Peters further discloses wherein the method further comprises: monitoring the virtual event between the first event attendee and the second event attendee; and modifying the graphical element to present a new topic in response to monitoring the virtual event and determining that a conversation subject has changed from the discussion topic to the new topic (time to take a break, Peters ¶132; The system can perform various actions based on the emotions and participant responses that it detects. For example, the system can prompt intervention in the meeting, prompt a speaker to change topics or change content, and so on, ¶134 and ¶274; determining intervention is needed, ¶137; end some meetings early, ¶141; keep current meeting short, ¶309).
As per claims 8 and 17, Peters and Govindaraman disclose the limitations as shown above with respect to claims 7 and 16. Peters further discloses wherein monitoring the virtual event comprises: performing a natural language processing of the audio feed; and detecting the new topic based on the natural language processing (In step 1804, as users participate in the virtual communication session, emotional intelligence and context data is compiled, filtered, and summarized for the user. Various types of data can be collected for a communication session, such as (1) a transcript of the conversation (entire or key-word summary), (2) facial expression data, emotional responses, cognitive attributes, etc., (3) voice stress analysis, and (4) speaking times for participants, as well as potentially biometric or physiological data (e.g., heart rate and blood pressure) gathered from Internet-of-Things (IOT) devices such as wearable devices. Data can be gathered for all participants in the communication session, not only to be able to determine an emotional map cookie for each participant but also to show how each individual reacts to the emotions and actions of the other participants. The processing of this data extracts key responses and events, filters out conditions that are not important, and summarizes the user's emotional and cognitive attributes and actions in the communication session, Peters ¶313) (Examiner interprets the processing of the communication session data to produce a transcript as including natural language processing).
As per claims 9 and 18, Peters and Govindaraman disclose the limitations as shown above with respect to claims 7 and 16. Peters further discloses wherein monitoring the virtual event comprises: generating a transcript for at least a subset of the virtual event; extracting at least one keyword from the transcript; and determining the new topic based in part on the at least one keyword (In step 1804, as users participate in the virtual communication session, emotional intelligence and context data is compiled, filtered, and summarized for the user. Various types of data can be collected for a communication session, such as (1) a transcript of the conversation (entire or key-word summary), (2) facial expression data, emotional responses, cognitive attributes, etc., (3) voice stress analysis, and (4) speaking times for participants, as well as potentially biometric or physiological data (e.g., heart rate and blood pressure) gathered from Internet-of-Things (IOT) devices such as wearable devices. Data can be gathered for all participants in the communication session, not only to be able to determine an emotional map cookie for each participant but also to show how each individual reacts to the emotions and actions of the other participants. The processing of this data extracts key responses and events, filters out conditions that are not important, and summarizes the user's emotional and cognitive attributes and actions in the communication session, Peters ¶313).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure (additional art can be located on the PTO-892):
Boufarhat (US PG Pub. 2023/0196682) Systems and methods for creating and presenting virtual events.
Ostrand et al. (US PG Pub. 2022/0303321) Automatically detecting need for breakout virtual meeting.
Li et al. (US PG Pub. 2009/0055234) System and methods for scheduling meetings by matching a meeting profile with virtual resources.
Any inquiry concerning this communication or earlier communications from the Examiner should be directed to ANDREW B WHITAKER whose telephone number is (571)270-7563. The examiner can normally be reached on M-F, 8am-5pm, EST.
If attempts to reach the examiner by telephone are unsuccessful, the Examiner’s supervisor, Lynda Jasmin can be reached on (571) 272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center; status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/ANDREW B WHITAKER/Primary Examiner, Art Unit 3629