DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
1. This action is responsive to an amendment filed on 09/15/2025. Claims 1-17 are pending.
Response to Arguments
2. Applicant's arguments filed in the 09/15/2025 remarks have been fully considered but are moot in view of the new ground(s) of rejection, which is deemed appropriate to address all of the needs at this time.
Prior art reference Iga was relied upon in the previous action, and Iga is again applied to the claim limitations that remain unchanged. New prior art is applied to the amended limitations in the claims.
Claim Rejections - 35 USC § 103
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
4. Claim(s) 1, 14, 15 are rejected under 35 U.S.C. 103 as being unpatentable over Iga et al. (US 2012/0219140) in view of Kuhlke et al. (US 2008/0320082) in further view of Avida et al. (US 2022/0014568).
Regarding claim 1, Iga teaches an apparatus comprising: at least one processor; and at least one memory storing instructions which, when executed by the at least one processor, cause the apparatus at least to perform: providing a personalised To-Do list for each of a plurality of participants of a virtual meeting or presentation (see fig. 2-4, ¶ 0061-0068, 0091. The conferencing participants each will have an agenda. Each participant has a speech time, and thus a scheduled time for each discussion of a topic. The conference progress supporting apparatus allows the setting of the scheduled discussion time of each agenda in a conference, the determination of the schedule of the agendas, and the determination of when each agenda is completed. Speech information acquisition section 23 acquires information indicating the start and the end of the speech of each participant for each agenda via the input device.).
Iga discloses a conferencing system wherein the participants each have an agenda (task) and speak when their time is scheduled. Iga is vague on providing each of the one or more participants of the non-focus group with the personalised To-Do list for that participant; receiving a plurality of interaction inputs for one or more of a plurality of time periods of the virtual meeting or presentation from the plurality of participants of the virtual meeting or presentation, the plurality of interaction inputs from the plurality of participants indicative of focus of each of the plurality of participants; determining, for each time period, one or more participants of the plurality of participants to allocate to a non-focus group based, at least partially, on one or more of the plurality of interaction inputs indicating that focus of the one or more participants has shifted away from the virtual meeting or presentation.
Kuhlke teaches receiving a plurality of interaction inputs for one or more of a plurality of time periods of the virtual meeting or presentation from the plurality of participants of the virtual meeting or presentation, the plurality of interaction inputs from the plurality of participants indicative of focus of each of the plurality of participants; determining, for each time period, one or more participants of the plurality of participants to allocate to a non-focus group based, at least partially, on one or more of the plurality of interaction inputs indicating that focus of the one or more participants has shifted away from the virtual meeting or presentation (see fig. 1-3, ¶ 0017, 0043, 0045-0046. The system tracks a plurality of participants in a conferencing session and aggregates attention metrics to identify the focus of the participants in the session. The display of the respective participant focus of attention metrics also can assist a presenter in identifying groups of meeting participants that have different levels of attention, for example a high-attention group, a moderate-attention group, and/or a low-attention group. A meeting attention tracker generates individual participant focus of attention metrics. A participant list also can be grouped by attention level. The meeting attention tracker can separately classify a group as a high focus of attention group, and separately display this high focus of attention group in the presenter meeting window or within the focus of attention report. The meeting attention tracker can classify a group as a low focus of attention group, and add the low focus of attention group to the presenter meeting window or the focus of attention report displayed in the presenter's meeting window. The metric information is accumulated over time (each time period) during a conferencing session, which would constitute a session time when aggregating data for the plurality of participants.).
The combination of Kuhlke with Iga provides grouping of participants based on attention (focus) during a conferencing session. Thus, when a participant lacks focus, that participant will be grouped with others that lack focus during the meeting.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga to incorporate monitoring a conferencing session to determine focus (attention) during the session. The modification provides for grouping participants that lack focus during the meeting.
Avida teaches providing each of the one or more participants of the non-focus group with the personalised To-Do list for that participant (see ¶ 0035. Tasks can be assigned to the group, which the group members will individually have to work on together. This is a known function when assigning tasks to groups.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga and Kuhlke to incorporate assigning tasks to a group of individuals. The modification provides for assigning tasks to a group of participants.
Regarding claim 14, Iga teaches a processor implemented method comprising: providing a personalised To-Do list for each of a plurality of participants of a virtual meeting or presentation (see fig. 2-4, ¶ 0061-0068, 0091. The conferencing participants each will have an agenda. Each participant has a speech time, and thus a scheduled time for each discussion of a topic. The conference progress supporting apparatus allows the setting of the scheduled discussion time of each agenda in a conference, the determination of the schedule of the agendas, and the determination of when each agenda is completed. Speech information acquisition section 23 acquires information indicating the start and the end of the speech of each participant for each agenda via the input device.).
Iga discloses a conferencing system wherein the participants each have an agenda (task) and speak when their time is scheduled. Iga is vague on receiving a plurality of interaction inputs for one or more of a plurality of time periods of the virtual meeting or presentation from the plurality of participants of the virtual meeting or presentation, the plurality of interaction inputs from the plurality of participants indicative of focus of each of the plurality of participants; determining, by at least one processor, for each time period, one or more participants of the plurality of participants to allocate to a non-focus group based, at least partially, on one or more of the plurality of interaction inputs indicating that focus of the one or more participants has shifted away from the virtual meeting or presentation; and providing each of the one or more participants of the non-focus group with the personalised To-Do list for that participant.
Kuhlke teaches receiving a plurality of interaction inputs for one or more of a plurality of time periods of the virtual meeting or presentation from the plurality of participants of the virtual meeting or presentation, the plurality of interaction inputs from the plurality of participants indicative of focus of each of the plurality of participants; determining, by at least one processor, for each time period, one or more participants of the plurality of participants to allocate to a non-focus group based, at least partially, on one or more of the plurality of interaction inputs indicating that focus of the one or more participants has shifted away from the virtual meeting or presentation (see fig. 1-3, ¶ 0017, 0043, 0045-0046. The system tracks a plurality of participants in a conferencing session and aggregates attention metrics to identify the focus of the participants in the session. The display of the respective participant focus of attention metrics also can assist a presenter in identifying groups of meeting participants that have different levels of attention, for example a high-attention group, a moderate-attention group, and/or a low-attention group. A meeting attention tracker generates individual participant focus of attention metrics. A participant list also can be grouped by attention level. The meeting attention tracker can separately classify a group as a high focus of attention group, and separately display this high focus of attention group in the presenter meeting window or within the focus of attention report. The meeting attention tracker can classify a group as a low focus of attention group, and add the low focus of attention group to the presenter meeting window or the focus of attention report displayed in the presenter's meeting window. The metric information is accumulated over time (each time period) during a conferencing session, which would constitute a session time when aggregating data for the plurality of participants.).
The combination of Kuhlke with Iga provides grouping of participants based on attention (focus) during a conferencing session. Thus, when a participant lacks focus, that participant will be grouped with others that lack focus during the meeting.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga to incorporate monitoring a conferencing session to determine focus (attention) during the session. The modification provides for grouping participants that lack focus during the meeting.
Avida teaches providing each of the one or more participants of the non-focus group with the personalised To-Do list for that participant (see ¶ 0035. Tasks can be assigned to the group, which the group members will individually have to work on together. This is a known function when assigning tasks to groups.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga and Kuhlke to incorporate assigning tasks to a group of individuals. The modification provides for assigning tasks to a group of participants.
Regarding claim 15, Iga teaches a non-transitory processor-readable medium having stored thereon instructions which, when executed by at least one processor of an apparatus, cause the apparatus to perform at least the following: providing a personalised To-Do list for each of a plurality of participants of a virtual meeting or presentation; receiving a plurality of interaction inputs for one or more of a plurality of time periods of the virtual meeting or presentation from the one or more participants of the virtual meeting or presentation (see fig. 2-4, ¶ 0061-0068, 0091. The conferencing participants each will have an agenda. Each participant has a speech time, and thus a scheduled time for each discussion of a topic. The conference progress supporting apparatus allows the setting of the scheduled discussion time of each agenda in a conference, the determination of the schedule of the agendas, and the determination of when each agenda is completed. Speech information acquisition section 23 acquires information indicating the start and the end of the speech of each participant for each agenda via the input device.).
Iga discloses a conferencing system wherein the participants each have an agenda (task) and speak when their time is scheduled. Iga is vague on receiving a plurality of interaction inputs for one or more of a plurality of time periods of the virtual meeting or presentation from the plurality of participants of the virtual meeting or presentation, the plurality of interaction inputs from the plurality of participants indicative of focus of each of the plurality of participants; determining, for each time period, one or more participants of the plurality of participants to allocate to a non-focus group based, at least partially, on one or more of the plurality of interaction inputs indicating that focus of the one or more participants has shifted away from the virtual meeting or presentation; and providing each of the one or more participants of the non-focus group with the personalised To-Do list for that participant.
Kuhlke teaches receiving a plurality of interaction inputs for one or more of a plurality of time periods of the virtual meeting or presentation from the plurality of participants of the virtual meeting or presentation, the plurality of interaction inputs from the plurality of participants indicative of focus of each of the plurality of participants; determining, for each time period, one or more participants of the plurality of participants to allocate to a non-focus group based, at least partially, on one or more of the plurality of interaction inputs indicating that focus of the one or more participants has shifted away from the virtual meeting or presentation (see fig. 1-3, ¶ 0017, 0043, 0045-0046. The system tracks a plurality of participants in a conferencing session and aggregates attention metrics to identify the focus of the participants in the session. The display of the respective participant focus of attention metrics also can assist a presenter in identifying groups of meeting participants that have different levels of attention, for example a high-attention group, a moderate-attention group, and/or a low-attention group. A meeting attention tracker generates individual participant focus of attention metrics. A participant list also can be grouped by attention level. The meeting attention tracker can separately classify a group as a high focus of attention group, and separately display this high focus of attention group in the presenter meeting window or within the focus of attention report. The meeting attention tracker can classify a group as a low focus of attention group, and add the low focus of attention group to the presenter meeting window or the focus of attention report displayed in the presenter's meeting window. The metric information is accumulated over time (each time period) during a conferencing session, which would constitute a session time when aggregating data for the plurality of participants.).
The combination of Kuhlke with Iga provides grouping of participants based on attention (focus) during a conferencing session. Thus, when a participant lacks focus, that participant will be grouped with others that lack focus during the meeting.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga to incorporate monitoring a conferencing session to determine focus (attention) during the session. The modification provides for grouping participants that lack focus during the meeting.
Avida teaches providing each of the one or more participants of the non-focus group with the personalised To-Do list for that participant (see ¶ 0035. Tasks can be assigned to the group, which the group members will individually have to work on together. This is a known function when assigning tasks to groups.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga and Kuhlke to incorporate assigning tasks to a group of individuals. The modification provides for assigning tasks to a group of participants.
5. Claim(s) 2 is rejected under 35 U.S.C. 103 as being unpatentable over Iga et al. (US 2012/0219140) in view of Kuhlke et al. (US 2008/0320082) in further view of Avida et al. (US 2022/0014568) in further view of West (US 2007/0209010).
Regarding claim 2, Iga, Kuhlke and Avida do not teach the apparatus as claimed in claim 1, wherein the instructions, when executed by the at least one processor, further cause the apparatus at least to perform: receiving an input from at least one of the one or more participants of the non-focus group indicating an intent corresponding to at least one task included in the personalised To-Do list of that participant, wherein the intent includes one or more of: starting a task; pausing a task, and completing a task.
West teaches receiving an input from at least one of the one or more participants of the non-focus group indicating an intent corresponding to at least one task included in the personalised To-Do list of that participant, wherein the intent includes one or more of: starting a task; pausing a task, and completing a task (see ¶ 0022. A participant can begin a task by clicking the begin button, which will start the task.).
The combination of West with Iga, Kuhlke and Avida teaches a user clicking a button to begin a selected task. The agenda provided by Iga, Kuhlke and Avida can be the task to be started.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga, Kuhlke and Avida to incorporate enabling a participant to begin a task. The modification provides for the system to begin a task upon participant input.
6. Claim(s) 3 is rejected under 35 U.S.C. 103 as being unpatentable over Iga et al. (US 2012/0219140) in view of Kuhlke et al. (US 2008/0320082) in further view of Avida et al. (US 2022/0014568).
Regarding claim 3, Iga teaches the apparatus as claimed in claim 1, wherein the instructions, when executed by the at least one processor, further cause the apparatus at least to perform: saving audio data of the virtual meeting or presentation for each participant of the non-focus group (see ¶ 0010, 0061. Storing the progress of the conferencing session.).
7. Claim(s) 4 is rejected under 35 U.S.C. 103 as being unpatentable over Iga et al. (US 2012/0219140) in view of Kuhlke et al. (US 2008/0320082) in further view of Avida et al. (US 2022/0014568) in further view of Reynolds (US 2015/0012270).
Regarding claim 4, Iga, Kuhlke and Avida do not teach the apparatus as claimed in claim 3, wherein for each participant of the non-focus group, saving the audio data starts when the participant is allocated to the non-focus group and stops when the participant is allocated to a focus group, or saving the audio data starts in response to an input from the participant indicating an intent to start a task and stops in response to an input from the participant indicating an intent to pause a task and/or complete a task.
Alternative language "or" is presented; thus, the examiner will select one claim limitation.
Reynolds teaches saving the audio data starts in response to an input from the participant indicating an intent to start a task and stops in response to an input from the participant indicating an intent to pause a task and/or complete a task (see ¶ 0069. Audio clips are stored based on the recording start and stop points of the clips for a conference topic.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga, Kuhlke and Avida to incorporate recording and storing audio clips of a topic during a session from start and stop points in the session. The modification provides for recording audio of topics during a conferencing session.
8. Claim(s) 5 is rejected under 35 U.S.C. 103 as being unpatentable over Iga et al. (US 2012/0219140) in view of Kuhlke et al. (US 2008/0320082) in further view of Avida et al. (US 2022/0014568) in further view of White et al. (US 10,742,817).
Regarding claim 5, Iga does not teach the apparatus as claimed in claim 1, wherein the instructions, when executed by the at least one processor, further cause the apparatus at least to perform: determining, for each time period, one or more participants of the virtual meeting or presentation to allocate to a focus group based, at least partially, on one or more of the plurality of interaction inputs; and preventing the one or more participants of the focus group from receiving the personalised To-Do list.
Kuhlke teaches determining, for each time period, one or more participants of the virtual meeting or presentation to allocate to a focus group based, at least partially, on one or more of the plurality of interaction inputs (see fig. 1-3, ¶ 0017, 0043, 0045-0046. The system tracks a plurality of participants in a conferencing session and aggregates attention metrics to identify the focus of the participants in the session. The display of the respective participant focus of attention metrics also can assist a presenter in identifying groups of meeting participants that have different levels of attention, for example a high-attention group, a moderate-attention group, and/or a low-attention group. A meeting attention tracker generates individual participant focus of attention metrics. A participant list also can be grouped by attention level. The meeting attention tracker can separately classify a group as a high focus of attention group, and separately display this high focus of attention group in the presenter meeting window or within the focus of attention report. The meeting attention tracker can classify a group as a low focus of attention group, and add the low focus of attention group to the presenter meeting window or the focus of attention report displayed in the presenter's meeting window. The metric information is accumulated over time (each time period) during a conferencing session, which would constitute a session time when aggregating data for the plurality of participants.).
The combination of Kuhlke with Iga and Avida provides grouping of participants based on attention (focus) during a conferencing session. Thus, when a participant lacks focus, that participant will be grouped with others that lack focus during the meeting.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga and Avida to incorporate monitoring a conferencing session to determine focus (attention) during the session. The modification provides for grouping participants that lack focus during the meeting.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga and Avida to incorporate a topic of discussion that is assigned to a group and thereby to everyone in that group personally. The modification provides for the group, and all participants in the group, to be assigned an agenda.
White teaches preventing the one or more participants of the focus group from receiving the personalised To-Do list (see col. 5, lines 34-56. A host or moderator can prevent or restrict sharing files with participants. This is consistent with preventing the sharing of an agenda or tasks with participants.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga, Kuhlke and Avida to incorporate preventing the sharing of files with participants in a session or group. The modification provides for preventing file sharing with participants.
9. Claim(s) 6 is rejected under 35 U.S.C. 103 as being unpatentable over Iga et al. (US 2012/0219140) in view of Kuhlke et al. (US 2008/0320082) in further view of Avida et al. (US 2022/0014568) in further view of White et al. (US 10,742,817) in further view of Han et al. (US 11,336,865).
Regarding claim 6, Iga, Kuhlke, Avida and White do not teach the apparatus as claimed in claim 5, wherein for each time period, if it is determined that one or more participants of the non-focus group is allocated to the focus group, notifying the one or more participants that they have been allocated to the focus group.
Han teaches wherein for each time period, if it is determined that one or more participants of the non-focus group is allocated to the focus group, notifying the one or more participants that they have been allocated to the focus group (col. 15, lines 1-19. A notification is presented as a pop-up to a participant for a breakout room.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga, Kuhlke, Avida and White to incorporate a notification for a breakout room for a participant. The modification provides for notifying a participant of a breakout room.
10. Claim(s) 7 is rejected under 35 U.S.C. 103 as being unpatentable over Iga et al. (US 2012/0219140) in view of Kuhlke et al. (US 2008/0320082) in further view of Avida et al. (US 2022/0014568) in further view of White et al. (US 10,742,817) in further view of Han et al. (US 11,336,865) in further view of O’Gorman et al. (US 2019/0378076).
Regarding claim 7, Iga, Kuhlke, Avida, White and Han do not teach the apparatus as claimed in claim 6, wherein the instructions, when executed by the at least one processor, further cause the apparatus at least to perform: prompting the one or more participants of the non-focus group allocated to the focus group to provide an input indicating that the task is paused or completed.
O’Gorman teaches prompting the one or more participants of the non-focus group allocated to the focus group to provide an input indicating that the task is paused or completed (see ¶ 0083. A prompt indicates that a topic is completed.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga, Kuhlke, Avida, White and Han to incorporate a prompt to indicate that a task is complete. The modification provides for indicating that a task is complete.
11. Claim(s) 8 is rejected under 35 U.S.C. 103 as being unpatentable over Iga et al. (US 2012/0219140) in view of Kuhlke et al. (US 2008/0320082) in further view of Avida et al. (US 2022/0014568) in further view of White et al. (US 10,742,817) in further view of Panchakshariaiah et al. (US 2023/0199120).
Regarding claim 8, Iga, Kuhlke, Avida and White do not teach the apparatus as claimed in claim 5, wherein determining the one or more participants to allocate to the focus group is based, at least partially, on whether a number and/or type of the plurality of interaction inputs received for a participant is equal to or above a threshold.
Panchakshariaiah teaches wherein determining the one or more participants to allocate to the focus group is based, at least partially, on whether a number and/or type of the plurality of interaction inputs received for a participant is equal to or above a threshold (see ¶ 0181-0183. A breakout room has a score, and participants having a high or low score relative to a threshold level will be paired with rooms based on those scores.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga, Kuhlke, Avida and White to incorporate a threshold for pairing participants with breakout rooms. The modification provides for pairing the participants in a room with similar scores.
12. Claim(s) 9 is rejected under 35 U.S.C. 103 as being unpatentable over Iga et al. (US 2012/0219140) in view of Kuhlke et al. (US 2008/0320082) in further view of Avida et al. (US 2022/0014568) in further view of Panchakshariaiah et al. (US 2023/0199120).
Regarding claim 9, Iga, Kuhlke, Avida do not teach the apparatus as claimed in claim 1, wherein determining the one or more participants to allocate to the non-focus group is based, at least partially, on whether a number and/or type of the plurality of interaction inputs received for a participant is below a threshold.
Panchakshariaiah teaches wherein determining the one or more participants to allocate to the non-focus group is based, at least partially, on whether a number and/or type of the plurality of interaction inputs received for a participant is below a threshold (see ¶ 0181-0183. A breakout room has a score, and participants having a high or low score relative to a threshold level will be paired with rooms based on those scores.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga, Kuhlke and Avida to incorporate a threshold for pairing participants with breakout rooms. The modification provides for pairing the participants in a room with similar scores.
13. Claim(s) 10 is rejected under 35 U.S.C. 103 as being unpatentable over Iga et al. (US 2012/0219140) in view of Kuhlke et al. (US 2008/0320082) in further view of Avida et al. (US 2022/0014568) in further view of Colson et al. (US 9,652,113) in further view of Pandy et al. (US 11,095,468).
Regarding claim 10, Iga, Kuhlke and Avida do not teach the apparatus as claimed in claim 1, wherein the instructions, when executed by the at least one processor, further cause the apparatus at least to perform: when the virtual meeting or presentation is finished, providing, to each of the one or more participants, a summary of the virtual meeting or presentation and at least one of: wherein the summary comprises a personalised To-Do list based, at least partially, on an updated To-Do list for the participant and/or an agenda of the virtual meeting or presentation, wherein the summary further comprises a global To-Do list for the virtual meeting or presentation based, at least partially, on an updated personalised To-Do list for one or more of the participants and/or an agenda of the virtual meeting or presentation, and wherein the apparatus further comprises means for performing updating one or more personalised To-Do lists in response to receiving from the one or more participants of the virtual meeting or presentation inputs indicating tasks that are started, in progress, completed or new tasks.
Colson teaches performing, when the virtual meeting or presentation is finished, providing, to each of the one or more participants, a summary of the virtual meeting or presentation, wherein the summary comprises a personalised To-Do list based, at least partially, on an updated To-Do list for the participant and/or an agenda of the virtual meeting or presentation, wherein the summary further comprises a global To-Do list for the virtual meeting or presentation based, at least partially, on an updated personalised To-Do list for one or more of the participants and/or an agenda of the virtual meeting or presentation (see col. 8, lines 5-30. At the end of a conferencing session, a summary report including topics and speakers is provided.). The examiner has selected this claim limitation in view of the alternative language "or."
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga, Kuhlke, and Avida to incorporate a session summary of a conference. The modification provides for summarizing the conferencing session.
Pandy teaches wherein the apparatus further comprises means for performing updating one or more personalised To-Do lists in response to receiving, from the one or more participants of the virtual meeting or presentation, inputs indicating tasks that are started, in progress, completed, or new tasks (see col. 15, lines 31-44; the meeting summary provides the task update when a task is completed).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga, Kuhlke, Avida, and Colson to incorporate a session summary of a conference that includes completed tasks. The modification provides for summarizing the conferencing session with the completed tasks.
14. Claim(s) 11 is rejected under 35 U.S.C. 103 as being unpatentable over Iga et al. (US 2012/0219140) in view of Kuhlke et al. (US 2008/0320082) in further view of Avida et al. (US 2022/0014568) in further view of Huang et al. (US 2022/0353100).
Regarding claim 11, Iga, Kuhlke, and Avida do not teach the apparatus as claimed in claim 10, wherein, for the one or more participants of the non-focus group, the summary further comprises the audio data, wherein the audio data comprises audio data from at least one of the plurality of time periods.
Huang teaches wherein, for the one or more participants of the non-focus group, the summary further comprises the audio data, wherein the audio data comprises audio data from at least one of the plurality of time periods (see ¶¶ 0099, 0105; the summary of the video conference includes audio transcripts with timestamps).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga, Kuhlke, and Avida to incorporate a session summary of the audio data with timestamps. The modification provides for summarizing the conferencing session and providing a string of timestamps for the session.
15. Claim(s) 12 is rejected under 35 U.S.C. 103 as being unpatentable over Iga et al. (US 2012/0219140) in view of Kuhlke et al. (US 2008/0320082) in further view of Avida et al. (US 2022/0014568) in further view of Shaffer et al. (US 8,971,511).
Regarding claim 12, Iga, Kuhlke, and Avida do not teach the apparatus as claimed in claim 1, wherein the interaction inputs comprise at least one of: speaking time of the one or more participants, and body movement of the one or more participants.
Shaffer teaches wherein the interaction inputs comprise at least one of: speaking time of the one or more participants, and body movement of the one or more participants (see claim 1: sending an indication to the new active speaker advising that the new active speaker is now among the active speakers, and communicating via an independent communication channel to a plurality of conference call participants that a specific period of time has been allocated to the new active speaker and that the plurality of conference call participants should not attempt to talk during the specific period of time).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga, Kuhlke, and Avida to incorporate a speaking time of the participants. The modification provides a time frame allocated for participants.
16. Claim(s) 13 is rejected under 35 U.S.C. 103 as being unpatentable over Iga et al. (US 2012/0219140) in view of Kuhlke et al. (US 2008/0320082) in further view of Avida et al. (US 2022/0014568) in further view of Dhara et al. (US 2020/0134572).
Regarding claim 13, Iga, Kuhlke, and Avida do not teach the apparatus as claimed in claim 1, wherein the instructions, when executed by the at least one processor, further cause the apparatus at least to perform: extracting meeting information from an agenda and/or a calendar invitation of the virtual meeting or presentation, wherein determining, for each time period, the one or more participants to allocate to the non-focus group is further based on the meeting information, and wherein the meeting information comprises one or more of: a type of the virtual meeting or presentation; a duration of the virtual meeting or presentation; items of the agenda of the virtual meeting or presentation; tasks and/or participants corresponding to the items of the agenda; and a role of each participant in the virtual meeting or presentation.
Dhara teaches wherein the apparatus further comprises means for performing extracting meeting information from an agenda and/or a calendar invitation of the virtual meeting or presentation, wherein determining, for each time period, the one or more participants to allocate to the non-focus group is further based on the meeting information, and wherein the meeting information comprises one or more of: a type of the virtual meeting or presentation; a duration of the virtual meeting or presentation; items of the agenda of the virtual meeting or presentation; tasks and/or participants corresponding to the items of the agenda; and a role of each participant in the virtual meeting or presentation (see figs. 4-5, ¶ 0056; a communication invitation provides a subject (topic), location, time (duration), message, and documents. This presents participants that are invited to a specific room and the time and duration at which the meeting will occur).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga, Kuhlke, and Avida to incorporate an invitation for a conferencing session. The modification of the invitation provides elements detailing the subject, time, duration, and location.
17. Claim(s) 16, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Iga et al. (US 2012/0219140) in view of Kuhlke et al. (US 2008/0320082) in further view of Avida et al. (US 2022/0014568) in further view of Szafir et al. (US 2021/0094180).
Regarding claim 16, Iga, Kuhlke and Avida do not teach the apparatus as claimed in claim 1, wherein the one or more of the plurality of interaction inputs indicating that focus of the one or more participants has shifted away from the virtual meeting or presentation comprises at least one of: number of body movements indicating that focus of the one or more participants has shifted away from the virtual meeting or presentation, or frequency of body movements indicating that focus of the one or more participants has shifted away from the virtual meeting or presentation.
Szafir teaches wherein the one or more of the plurality of interaction inputs indicating that focus of the one or more participants has shifted away from the virtual meeting or presentation comprises at least one of: number of body movements indicating that focus of the one or more participants has shifted away from the virtual meeting or presentation, or frequency of body movements indicating that focus of the one or more participants has shifted away from the virtual meeting or presentation (see ¶ 0098; the system records the number of times a participant is distracted, looking away (eye gaze or head movement)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga, Kuhlke, and Avida to incorporate the number of times a participant was looking away. The modification provides for a count of the number of times that the user looked away.
Regarding claim 17, Iga, Kuhlke and Avida do not teach the apparatus as claimed in claim 16, wherein the body movements comprise at least one of: eye movements, hand movements, head movements, or posture change.
Szafir teaches wherein the body movements comprise at least one of: eye movements, hand movements, head movements, or posture change (see ¶ 0098; the system records the number of times a participant is distracted, looking away (eye gaze or head movement)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iga, Kuhlke, and Avida to incorporate the number of times a participant was looking away. The modification provides for a count of the number of times that the user looked away.
Conclusion
18. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASSAD MOHAMMED whose telephone number is (571) 270-7253. The examiner can normally be reached 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ASSAD MOHAMMED/ Examiner, Art Unit 2691
/DUC NGUYEN/ Supervisory Patent Examiner, Art Unit 2691