Prosecution Insights
Last updated: April 19, 2026
Application No. 18/668,946

DETERMINATION OF CONFERENCE PARTICIPANT CONTRIBUTION

Non-Final OA: §102, §DP
Filed: May 20, 2024
Examiner: GAUTHIER, GERALD
Art Unit: 2692
Tech Center: 2600 (Communications)
Assignee: Mitel Networks Corporation
OA Round: 1 (Non-Final)
Grant Probability: 91% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 9m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allow Rate: 91% (1630 granted / 1791 resolved), +29.0% vs Tech Center average; grants above average.
Interview Lift: +6.5% (a moderate lift, comparing resolved cases with and without an interview).
Typical Timeline: 2y 9m average prosecution; 17 applications currently pending.
Career History: 1808 total applications across all art units.
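The figures above are simple ratios over the examiner's resolved cases. As a rough illustration of how they fit together (the function name is mine, and the Tech Center baseline is inferred from the stated +29.0% delta rather than taken from the report):

```python
# Reconstructing the headline examiner metrics from the counts in the report.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage, rounded to one decimal."""
    return round(100 * granted / resolved, 1)

rate = allow_rate(1630, 1791)    # 91.0, matching the reported 91%
tc_avg = round(rate - 29.0, 1)   # implied Tech Center average: 62.0
with_interview = rate + 6.5      # 97.5, consistent with the reported ~98%
print(rate, tc_avg, with_interview)
```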

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§103: 30.9% (-9.1% vs TC avg)
§102: 29.3% (-10.7% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)

Based on career data from 1791 resolved cases; deltas are relative to a Tech Center average estimate.
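As a quick sanity check on the chart above (an illustrative calculation, not part of the report), each examiner rate minus its "vs TC avg" delta should recover the Tech Center baseline it was measured against:

```python
# Invert each statute's delta to recover the implied Tech Center baseline:
# baseline = examiner_rate - delta.
rates = {"101": (9.5, -30.5), "103": (30.9, -9.1),
         "102": (29.3, -10.7), "112": (8.3, -31.7)}

baselines = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(baselines)  # every statute implies the same 40.0% baseline
```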

Office Action

§102, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on April 10, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Shetty et al. (US 2023/0033595 A1).

As to claim 1, Shetty discloses a conferencing system [120 on FIG. 1] for determining a participant's status [User Data 133 on FIG. 1] during a conference [Paragraph 0010], the conferencing system comprising: a conference server [Computing Environment 103 on FIG. 1] configured to identify a participant and topics [Identify affiliation of attendees] on which the participant comments based on data entries [Email domain name] by the participant into a participant device [Client device 109 on FIG. 1] assigned to the participant ["The conference assistant service which is included in the computing environment, groups attendees by their affiliation, email domain name associated with and a group within an organization as specify by the directory service." Paragraphs 0017-0018]; one or more participant devices [Client device 109 on FIG. 1] in communication with the conference server, wherein each participant device is assigned to a unique participant [User's name] and wherein the one or more participant devices includes the participant device ["The participant device is assigned uniquely to the user's name, email address and contact information." Paragraphs 0017, 0024 and 0025]; a natural language processing (NLP) processor [Natural language processing in 120 on FIG. 1] in communication with the conference server, wherein the NLP processor is configured to identify the participant and one or more topics on which the participant comments based on the participant's speech ["The conference assistant service provide a machine service layer that perform the task of natural language processing. The processor analyzes the transcripts to identify the participants data and affiliations." Paragraphs 0021, 0025 and 0026]; a topics database [Data Store 121 on FIG. 1] in communication with the conference server and with the NLP processor, wherein the topics database stores one or both of the participant's speech and the participant's data entries made during the conference ["The data store memory stores meting data and user accounts during the meeting session." Paragraphs 0023-0024]; and a ranking engine [Conference Assistance Service 120 on FIG. 1] in communication with the topics database, wherein the ranking engine is configured to identify one or both of (a) whether the participant was an originator of an idea [Potential distraction on the meeting], and (b) the participant's relative rank in contribution [The user is speaking a different language] to a topic ["The conference assistant service performs an analysis of the audio and video component to detect a potential distraction from an attendee of the event and identifies the user that speaks a different language as topic of the meeting." Paragraphs 0043 and 0049]; wherein the conference server alerts at least one of the one or more participant devices when an inconsistency from the conference topic is detected ["The conferencing assistant service performs an analysis of the component of the participant (attendees) detect a potential distraction (inconsistency) and perform an image recognition (alert)." Paragraph 0043].

As to claim 2, Shetty discloses the conferencing system of claim 1 that further includes one or more cameras [Camera associated with a client device 109 on FIG. 1] in communication with the conferencing server, wherein the one or more cameras are in communication with the conferencing server and the conferencing server is further configured to identify the participant based on the participant's appearance ["The conference assistant service perform an image recognition analysis to identify attendee eyes away from a camera associated with a client device." Paragraphs 0043 and 0050].
As to claim 3, Shetty discloses the conferencing system of claim 1, wherein the topics database further stores the starting time and duration of the one or both of the participant's speech and the participant's data entries made during the conference ["The conferencing assistant service identifies a late arriving attendee to an event by comparing the stored data in the database of the start time of the meeting and the actual time of the arriving attendee in the meeting. It allows the participant to recap of a previous time period of the meeting." Paragraphs 0018, 0023-0024].

As to claim 4, Shetty discloses the conferencing system of claim 1, that further includes one or more microphones in communication with NLP processor, wherein the one or more microphones are configured to receive and transmit the participant's speech to the NLP processor ["The microphones receive the participant's speech and converted speech to text to transmit to the NLP for further processing." Paragraphs 0010 and 0026].

As to claim 5, Shetty discloses the conferencing system of claim 1, wherein one or both of the NLP processor and the conference server is further configured to identify one or both of synonymous and semantically-similar (a) words, and (b) phrases ["The NLP identifies the language being spoken by the user by grouping the words are used to identify language." Paragraph 0047].

As to claim 6, Shetty discloses the conferencing system of claim 1, wherein one or both of the NLP processor and the conference server is further programmed to translate words from one language into another language ["The NLP includes a translation library to identify the language being spoken by the user." Paragraph 0047].
As to claim 7, Shetty discloses the conferencing system of claim 1, wherein the conference server is further configured to determine a participant's contribution to the topic by analyzing the content and length of one or both of the participant's speech and data entries ["The conference assistant service identifies affiliation of attendees base on content of the event in the conference system for groupings of attendees." Paragraph 0018].

As to claim 8, Shetty discloses the conferencing system of claim 1, wherein the conference server directs at least one of the one or more participant devices to display one or both of the name and image of the originator when the originator's idea is discussed ["The client devices includes a display that will allow the user upon user interface generated by an application to display video of the originator." Paragraph 0071].

As to claim 9, Shetty discloses a method for determining a participant's status during a conference [Paragraph 0002], the method comprising: identifying by a conferencing server a participant and topics on which the participant comments based on data entries by the participant in a participant device assigned to the participant ["The conference assistant service which is included in the computing environment, groups attendees by their affiliation, email domain name associated with and a group within an organization as specify by the directory service." Paragraphs 0017-0018]; identifying, by a natural language processing (NLP) processor in communication with the conference server, the participant and topics on which the participant comments based on the participant's speech ["The participant device is assigned uniquely to the user's name, email address and contact information." Paragraphs 0017, 0024 and 0025]; storing one or both of the participant's speech and the participant's data entries in a topics database that is in communication with the conference server ["The data store memory stores meting data and user accounts during the meeting session." Paragraphs 0023-0024]; determining, by utilizing a ranking engine in communication with the topics database, one or both of (a) whether a participant was an originator of an idea, and (b) the participant's relative rank in contribution to a topic based on analyzing the content and length of one or both of the participant's speech or data entries [The user is speaking a different language] to a topic ["The conference assistant service performs an analysis of the audio and video component to detect a potential distraction from an attendee of the event and identifies the user that speaks a different language as topic of the meeting." Paragraphs 0043 and 0049]; and the conference server identifying inconsistencies in the ranking engine identifying the originator of a topic ["The conferencing assistant service performs an analysis of the component of the participant (attendees) detect a potential distraction (inconsistency)." Paragraph 0043]; wherein the conference server alerts at least one of the one or more participant devices when an inconsistency from the conference topic is detected ["The conferencing assistant service performs an analysis of the component of the participant (attendees) detect a potential distraction (inconsistency) and performs an image recognition (alert)." Paragraph 0043].

As to claim 10, Shetty discloses the method of claim 9 that further includes the step of the conference server communicating one or both of the name and image of the originator of the idea to at least one of the one or more participant devices when the idea is being discussed ["The client devices includes a display that will allow the user upon user interface generated by an application to display video of the originator." Paragraph 0071].
As to claim 11, Shetty discloses the method of claim 9 that further includes the step of saving, in a memory in communication with the conference server, one or both of (a) the originator of at least one idea, and (b) the participant's relative rank in contribution to a topic ["The data store stores user information such as primary language, past breakrooms created within the conference service." Paragraph 0022].

As to claim 12, Shetty discloses the method of claim 9 that further includes the step of the conference server identifying inconsistencies in the ranking engine identifying the originator of a topic ["The user might be heard speaking in his or her native language to someone in the room, somewhat inconsistent from the English language." Paragraph 0011].

As to claim 13, Shetty discloses the method of claim 9, wherein the inconsistencies include stated ownership of the idea by participants other than the originator of the idea ["The user might be heard speaking in his or her native language to someone in the room, somewhat inconsistent from the English language." Paragraph 0011].

As to claim 14, Shetty discloses the method of claim 12, wherein the conference server alerts at least one of the one or more participant devices when an inconsistency from the conference topic is detected ["The user might be heard speaking in his or her native language to someone in the room, somewhat inconsistent from the English language." Paragraph 0011].

As to claim 15, Shetty discloses a conferencing system [FIG. 1] for determining a participant's status during a conference, the conferencing system comprising: a tangible, non-transitory memory configured to communicate with a conference server, wherein the tangible, non-transitory memory comprises instructions stored thereon that [Paragraph 0069], in response to execution by the conference server, cause the conference server to at least identify a participant and topics on which the participant comments based on data entries by the participant in a participant device assigned to the participant ["The conference assistant service which is included in the computing environment, groups attendees by their affiliation, email domain name associated with and a group within an organization as specify by the directory service." Paragraphs 0017-0018]; one or more participant devices in communication with the conference server, wherein each of the one or more participant devices is assigned to a unique participant and the one or more participant devices includes the participant device assigned to the participant ["The participant device is assigned uniquely to the user's name, email address and contact information." Paragraphs 0017, 0024 and 0025]; a natural language processing (NLP) processor in communication with the conference server, wherein the NLP processor is configured to identify the participant and topics on which the participant comments based on the participant's speech ["The conference assistant service provide a machine service layer that perform the task of natural language processing. The processor analyzes the transcripts to identify the participants data and affiliations." Paragraphs 0021, 0025 and 0026]; a topics database in communication with the conference server and the NLP processor, wherein the topics database stores one or both of the participant's speech and the participant's data entries made during the conference ["The data store memory stores meting data and user accounts during the meeting session." Paragraphs 0023-0024]; and a ranking engine in communication with the topics database, wherein the ranking engine is configured to identify which participant was the originator of an idea ["The conference assistant service performs an analysis of the audio and video component to detect a potential distraction from an attendee of the event and identifies the user that speaks a different language as topic of the meeting." Paragraphs 0043 and 0049]; wherein the conference server alerts at least one of the one or more participant devices when an inconsistency from the conference topic is detected ["The conferencing assistant service performs an analysis of the component of the participant (attendees) detect a potential distraction (inconsistency) and perform an image recognition (alert)." Paragraph 0043].

As to claim 16, Shetty discloses the conferencing system of claim 1, wherein the topics database further stores the starting time and duration of each occurrence of at least one of the one or both of the participant's speech and the participant's data entries ["The conference assistant service define a period of time which include the starting time and the duration of the event in other to recap the previous conversation to the absent user." Paragraph 0066].
As to claim 17, Shetty discloses the conferencing system of claim 15, wherein the conference server and the NLP processor are each further programmed to identify synonymous and semantically-similar terms ["The NLP identifies the language being spoken by the user by grouping the words are used to identify language." Paragraph 0047].

As to claim 18, Shetty discloses the conferencing system of claim 15, wherein the ranking engine is further configured to identify the length of time the participant spoke about and sent data about the topic ["The period of time of the event and the event are recapped to the user that missing portion of the event." Paragraph 0066].

As to claim 19, Shetty discloses the conferencing server of claim 15, wherein one or both of the conference server and the NLP processor are further configured to filter out false positive comments about the topic ["The user might be heard speaking in his or her native language to someone in the room, somewhat inconsistent from the English language." Paragraph 0011].

As to claim 20, Shetty discloses the conferencing system of claim 15, wherein the ranking engine further provides dynamic status updates based on one or both of the speech and data being used during the conference to identify the originator of the idea each time the idea is discussed ["The client devices includes a display that will allow the user upon user interface generated by an application to display video of the originator." Paragraph 0071].

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c).
A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-17 of U.S. Patent No. 12,008,997 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because at least one claim of the instant application is being taught by the claims of the U.S. Patent. Patented claim 1 recites a conferencing system for determining a participant's status during a conference which performs the feature of wherein the ranking engine is configured to identify one or both of (a) whether the participant was an originator of an idea, and (b) the participant's relative rank in contribution to a topic. The pending claim 1 recites a conferencing system for determining a participant's status during a conference which performs the similar feature of wherein the ranking engine is configured to identify one or both of (a) whether the participant was an originator of an idea, and (b) the participant's relative rank in contribution to a topic. Pending claims 2-20 have similar limitations comparing the patented claims 2-17 as shown in the table below.

Pending claims:

1.
A conferencing system for determining a participant's status during a conference, the conferencing system comprising: a conference server configured to identify a participant and topics on which the participant comments based on data entries by the participant into a participant device assigned to the participant; one or more participant devices in communication with the conference server, wherein each participant device is assigned to a unique participant and wherein the one or more participant devices includes the participant device; a natural language processing (NLP) processor in communication with the conference server, wherein the NLP processor is configured to identify the participant and one or more topics on which the participant comments based on the participant's speech; a topics database in communication with the conference server and with the NLP processor, wherein the topics database stores one or both of the participant's speech and the participant's data entries made during the conference; and a ranking engine in communication with the topics database, wherein the ranking engine is configured to identify one or both of (a) whether the participant was an originator of an idea, and (b) the participant's relative rank in contribution to a topic; wherein the conference server alerts at least one of the one or more participant devices when an inconsistency from the conference topic is detected.

2. The conferencing system of claim 1 that further includes one or more cameras in communication with the conferencing server, wherein the one or more cameras are in communication with the conferencing server and the conferencing server is further configured to identify the participant based on the participant's appearance.

3. The conferencing system of claim 1, wherein the topics database further stores the starting time and duration of the one or both of the participant's speech and the participant's data entries made during the conference.

4. The conferencing system of claim 1, that further includes one or more microphones in communication with NLP processor, wherein the one or more microphones are configured to receive and transmit the participant's speech to the NLP processor.

5. The conferencing system of claim 1, wherein one or both of the NLP processor and the conference server is further configured to identify one or both of synonymous and semantically-similar (a) words, and (b) phrases.

6. The conferencing system of claim 1, wherein one or both of the NLP processor and the conference server is further programmed to translate words from one language into another language.

7. The conferencing system of claim 1, wherein the conference server is further configured to determine a participant's contribution to the topic by analyzing the content and length of one or both of the participant's speech and data entries.

8. The conferencing system of claim 1, wherein the conference server directs at least one of the one or more participant devices to display one or both of the name and image of the originator when the originator's idea is discussed.

9. A method for determining a participant's status during a conference, the method comprising: identifying by a conferencing server a participant and topics on which the participant comments based on data entries by the participant in a participant device assigned to the participant; identifying, by a natural language processing (NLP) processor in communication with the conference server, the participant and topics on which the participant comments based on the participant's speech; storing one or both of the participant's speech and the participant's data entries in a topics database that is in communication with the conference server; determining, by utilizing a ranking engine in communication with the topics database, one or both of (a) whether a participant was an originator of an idea, and (b) the participant's relative rank in contribution to a topic based on analyzing the content and length of one or both of the participant's speech or data entries; and the conference server identifying inconsistencies in the ranking engine identifying the originator of a topic; wherein the conference server alerts at least one of the one or more participant devices when an inconsistency from the conference topic is detected.

10. The method of claim 9 that further includes the step of the conference server communicating one or both of the name and image of the originator of the idea to at least one of the one or more participant devices when the idea is being discussed.

11. The method of claim 9 that further includes the step of saving, in a memory in communication with the conference server, one or both of (a) the originator of at least one idea, and (b) the participant's relative rank in contribution to a topic.

12. The method of claim 9, wherein one or both of the conference server and the NLP processor are further configured to filter out false positive comments about the topic.

13. The method of claim 9, wherein the inconsistencies include stated ownership of the idea by participants other than the originator of the idea.

14. The method of claim 12, wherein the conference server alerts at least one of the one or more participant devices when an inconsistency from the conference topic is detected.

15. A conferencing system for determining a participant's status during a conference, the conferencing system comprising: a tangible, non-transitory memory configured to communicate with a conference server, wherein the tangible, non-transitory memory comprises instructions stored thereon that, in response to execution by the conference server, cause the conference server to at least identify a participant and topics on which the participant comments based on data entries by the participant in a participant device assigned to the participant; one or more participant devices in communication with the conference server, wherein each of the one or more participant devices is assigned to a unique participant and the one or more participant devices includes the participant device assigned to the participant; a natural language processing (NLP) processor in communication with the conference server, wherein the NLP processor is configured to identify the participant and topics on which the participant comments based on the participant's speech; a topics database in communication with the conference server and the NLP processor, wherein the topics database stores one or both of the participant's speech and the participant's data entries made during the conference; and a ranking engine in communication with the topics database, wherein the ranking engine is configured to identify which participant was the originator of an idea; wherein the conference server alerts at least one of the one or more participant devices when an inconsistency from the conference topic is detected.

16. The conferencing system of claim 1, wherein the topics database further stores the starting time and duration of each occurrence of at least one of the one or both of the participant's speech and the participant's data entries.

17. The conferencing system of claim 15, wherein the conference server and the NLP processor are each further programmed to identify synonymous and semantically-similar terms.

18. The conferencing system of claim 15, wherein the ranking engine is further configured to identify the length of time the participant spoke about and sent data about the topic.

19. The conferencing server of claim 15, wherein one or both of the conference server and the NLP processor are further configured to filter out false positive comments about the topic.

20. The conferencing system of claim 15, wherein the ranking engine further provides dynamic status updates based on one or both of the speech and data being used during the conference to identify the originator of the idea each time the idea is discussed.

Patented claims:

1.
A conferencing system for determining a participant's status during a conference, the conferencing system comprising: a conference server configured to identify a participant and topics on which the participant comments based on data entries by the participant into a participant device assigned to the participant; one or more participant devices in communication with the conference server, wherein each participant device is assigned to a unique participant and wherein the one or more participant devices includes the participant device; a natural language processing (NLP) processor in communication with the conference server, wherein the NLP processor is configured to identify the participant and one or more topics on which the participant comments based on the participant's speech; a topics database in communication with the conference server and with the NLP processor, wherein the topics database stores one or both of the participant's speech and the participant's data entries made during the conference; and a ranking engine in communication with the topics database, wherein the ranking engine is configured to identify one or both of (a) whether the participant was an originator of an idea, and (b) the participant's relative rank in contribution to a topic; wherein one or both of the conference server and the NLP processor are further configured to filter out false positive comments about the topic.

2. The conferencing system of claim 1 that further includes one or more cameras in communication with the conferencing server, wherein the one or more cameras are in communication with the conferencing server and the conferencing server is further configured to identify the participant based on the participant's appearance.

3. The conferencing system of claim 1, wherein the topics database further stores the starting time and duration of the one or both of the participant's speech and the participant's data entries made during the conference.

4. The conferencing system of claim 1, that further includes one or more microphones in communication with NLP processor, wherein the one or more microphones are configured to receive and transmit the participant's speech to the NLP processor.

5. The conferencing system of claim 1, wherein one or both of the NLP processor and the conference server is further configured to identify one or both of synonymous and semantically-similar (a) words, and (b) phrases.

6. The conferencing system of claim 1, wherein one or both of the NLP processor and the conference server is further programmed to translate words from one language into another language.

7. The conferencing system of claim 1, wherein the conference server is further configured to determine a participant's contribution to the topic by analyzing the content and length of one or both of the participant's speech and data entries.

8. The conferencing system of claim 1, wherein the conference server directs at least one of the one or more participant devices to display one or both of the name and image of the originator when the originator's idea is discussed.

9. The conferencing system of claim 1, wherein the topics database further stores the starting time and duration of each occurrence of at least one of the one or both of the participant's speech and the participant's data entries.

10. A method for determining a participant's status during a conference, the method comprising: identifying by a conferencing server a participant and topics on which the participant comments based on data entries by the participant in a participant device assigned to the participant; identifying, by a natural language processing (NLP) processor in communication with the conference server, the participant and topics on which the participant comments based on the participant's speech; storing one or both of the participant's speech and the participant's data entries in a topics database that is in communication with the conference server; determining, by utilizing a ranking engine in communication with the topics database, one or both of (a) whether a participant was an originator of an idea, and (b) the participant's relative rank in contribution to a topic based on analyzing the content and length of one or both of the participant's speech or data entries; and the conference server identifying inconsistencies in the ranking engine identifying the originator of a topic, wherein the inconsistencies include stated ownership of the idea by participants other than the originator of the idea.

11. The method of claim 10 that further includes the step of the conference server communicating one or both of the name and image of the originator of the idea to at least one of the one or more participant devices when the idea is being discussed.

12. The method of claim 10 that further includes the step of saving, in a memory in communication with the conference server, one or both of (a) the originator of at least one idea, and (b) the participant's relative rank in contribution to a topic.

13. The method of claim 10, wherein the conference server alerts at least one of the one or more participant devices when an inconsistency from the conference topic is detected.

14.
A conferencing system for determining a participant's status during a conference, the conferencing system comprising: a tangible, non-transitory memory configured to communicate with a conference server, wherein the tangible, non-transitory memory comprises instructions stored thereon that, in response to execution by the conference server, cause the conference server to at least identify a participant and topics on which the participant comments based on data entries by the participant in a participant device assigned to the participant; one or more participant devices in communication with the conference server, wherein each of the one or more participant devices is assigned to a unique participant and the one or more participant devices includes the participant device assigned to the participant; a natural language processing (NLP) processor in communication with the conference server, wherein the NLP processor is configured to identify the participant and topics on which the participant comments based on the participant's speech; a topics database in communication with the conference server and the NLP processor, wherein the topics database stores one or both of the participant's speech and the participant's data entries made during the conference; and a ranking engine in communication with the topics database, wherein the ranking engine is configured to identify which participant was the originator of an idea; wherein one or both of the conference server and the NLP processor are further configured to filter out false positive comments about the topic. 15. The conferencing system of claim 14, wherein the conference server and the NLP processor are each further programmed to identify synonymous and semantically-similar terms. 16. The conferencing system of claim 14, wherein the ranking engine is further configured to identify the length of time the participant spoke about and sent data about the topic. 17. 
The conferencing system of claim 14, wherein the ranking engine further provides dynamic status updates based on one or both of the speech and data being used during the conference to identify the originator of the idea each time the idea is discussed. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892 form. Kaye et al. (US 9,787,847 B2) discloses generating a roster of meeting participants comprising meeting invitees; accessing calendaring information concerning the teleconference meeting. Any inquiry concerning this communication or earlier communications from the examiner should be directed to GERALD GAUTHIER whose telephone number is (571)272-7539. The examiner can normally be reached 8:00 AM to 4:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, CAROLYN R EDWARDS can be reached on (571) 270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GERALD GAUTHIER/
Primary Examiner, Art Unit 2692
February 4, 2026

/CAROLYN R EDWARDS/
Supervisory Patent Examiner, Art Unit 2692

Prosecution Timeline

May 20, 2024
Application Filed
Feb 04, 2026
Non-Final Rejection — §102, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604148 — AUDIO PROCESSING USING EAR-WEARABLE DEVICE AND WEARABLE VISION DEVICE
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602197 — CONFIGURATION OF PLATFORM APPLICATION WITH AUDIO PROFILE OF A USER
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12596522 — ARTIFICIAL REALITY BASED DJ SYSTEM, METHOD AND COMPUTER PROGRAM IMPLEMENTING A SCRATCHING OPERATION OR A PLAYBACK CONTROL OPERATION
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12597435 — SIGNAL PROCESSING APPARATUS AND SIGNAL PROCESSING METHOD
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12598411 — HEARING DEVICE COMPRISING A PARTITION
Granted Apr 07, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
91%
Grant Probability
98%
With Interview (+6.5%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 1791 resolved cases by this examiner. Grant probability derived from career allow rate.
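The headline projections are straightforward arithmetic on the examiner's career totals shown above. Assuming the interview lift is simply additive in percentage points (the product's actual model may differ), they can be reproduced as:

```python
granted, resolved = 1630, 1791             # career totals listed above
interview_lift = 6.5                       # reported lift, in percentage points

allow_rate = 100 * granted / resolved      # career allow rate, in percent
print(round(allow_rate))                   # 91 -> "Grant Probability"
print(round(allow_rate + interview_lift))  # 98 -> "With Interview"
```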
