Prosecution Insights
Last updated: April 19, 2026
Application No. 18/792,483

MEETING MINUTES AUTOMATIC GENERATION SYSTEM AND MEETING MINUTES AUTOMATIC GENERATION METHOD

Non-Final OA: §101, §103
Filed
Aug 01, 2024
Examiner
TENGBUMROONG, NATHAN NARA
Art Unit
2654
Tech Center
2600 — Communications
Assignee
Inventec Appliances Corporation
OA Round
1 (Non-Final)
Grant Probability: 43% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Grants 43% of resolved cases
Career Allow Rate: 43% (6 granted / 14 resolved; -19.1% vs TC avg)
Interview Lift: +75.0% (strong; based on resolved cases with interview)
Typical Timeline: 3y 0m avg prosecution; 34 currently pending
Career History: 48 total applications across all art units
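The headline figures above reduce to simple arithmetic on the examiner's record. The sketch below assumes the dashboard's 43% is the plain granted/resolved ratio and that the "-19.1% vs TC avg" delta is measured against that same rate; the variable names are illustrative, not taken from the tool.

```python
# Hypothetical reconstruction of the dashboard's headline figures from the
# stated career data (6 granted out of 14 resolved cases).

granted = 6
resolved = 14

career_allow_rate = granted / resolved * 100  # ~42.9%, displayed as 43%
print(f"Career allow rate: {career_allow_rate:.1f}%")

# The "-19.1% vs TC avg" delta implies a Tech Center average near 62%:
tc_avg = career_allow_rate - (-19.1)
print(f"Implied TC average: {tc_avg:.1f}%")
```

If the ratio is taken this way, the displayed 43% is the rounded value of 42.9%, and the Tech Center baseline works out to roughly 62%.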

Statute-Specific Performance

§101: 27.2% (-12.8% vs TC avg)
§103: 54.3% (+14.3% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 3.2% (-36.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 14 resolved cases
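Notably, the four per-statute deltas are mutually consistent with a single Tech Center baseline: subtracting each delta from the examiner's rate recovers about 40.0% in every case. A minimal check, assuming the deltas are simple differences against that baseline (the dictionary below is illustrative, not the tool's data model):

```python
# Recover the implied Tech Center baseline from each (rate, delta) pair.
# All four statutes point to the same ~40.0% estimate.

stats = {
    "§101": (27.2, -12.8),
    "§103": (54.3, +14.3),
    "§102": (14.8, -25.2),
    "§112": (3.2, -36.8),
}

for statute, (rate, delta) in stats.items():
    tc_baseline = rate - delta
    print(f"{statute}: examiner {rate}% vs implied TC avg {tc_baseline:.1f}%")
```

That the baseline is identical across statutes suggests the "black line" is one aggregate Tech Center estimate rather than a per-statute average.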

Office Action

§101 §103
DETAILED ACTION

This office action is in response to Applicant’s submission filed on 8/01/2024. Claims 1-10 are pending in the application. As such, claims 1-10 have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on 9/12/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, the claim recites “(a) comparison program identifies identities of the plurality of members corresponding to the plurality of voice records”, “(b) the speech-to-text conversion program converts the plurality of voice records into a plurality of text records”, “(c) the meeting minutes generation program determines an importance level of each of the plurality of text records based on an identity of each of the plurality of members”, and “(d) meeting minutes generation program generates an overall meeting minutes based on the plurality of text records and importance levels of the plurality of text records.” Limitations (a) – (d) recite mental processes that may be practically performed in the mind using pen and paper. For example, limitation (a) can be done by someone determining a speaker by listening to a voice record. Limitation (b) can be done by someone transcribing a voice recording.
Limitation (c) can be done by someone determining the importance of an utterance based on a speaker’s identity. Limitation (d) can be done by someone generating a meeting summary using text records and corresponding importance levels. Under its broadest reasonable interpretation when read in light of the specification, the actions to “identify,” “convert,” “determine,” and “generate” encompass mental processes practically performed in the human mind by evaluation and judgment using pen and paper or a generic computer. Accordingly, the claim recites an abstract idea (Step 2A, Prong One).

The judicial exception is not integrated into a practical application. In particular, the claim recites additional elements of “(e) a voice capturing device electrically connected to the control circuit and capturing a plurality of voice records of a plurality of members”, “(f) a storage device electrically connected to the control circuit”, and “(g) control circuit accesses the storage device to execute a comparison program, a speech-to-text conversion program, and a meeting minutes generation program.” The limitation, (e), is mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity. In addition, all uses of the recited judicial exception require such data gathering, and, as such, this limitation does not impose any meaningful limits on the claim. This limitation amounts to necessary data gathering.

Further, limitations (a) - (e) are recited as being performed by a computer. In limitation (e), the computer is used as a tool to perform the generic computer function of receiving data. In limitations (a) - (d), the computer is used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that it amounts to no more than mere instructions to apply the exception using a generic computer. The limitations (f) – (g) provide nothing more than mere instructions to implement an abstract idea on a generic computer.
The different devices/programs recited in limitations (f) – (g) are used to invoke generic computer components as a tool to perform an existing process. Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to an abstract idea (Step 2A: YES).

The claim does not include additional elements that are sufficient to amount to more than the judicial exception. As discussed above, the recitation of a computer to perform limitations (a) – (e) amounts to no more than mere instructions to apply the exception using a generic computer component. Also as discussed above, limitation (f) is recited at a high level of generality. This element amounts to receiving speech data, which is well-understood, routine, conventional activity, as supported by paragraph [0018] of applicant’s specification. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept (Step 2B).

Regarding claim 5, the claim is rejected with similar analysis to claim 1. Similarly, dependent claims 2-4 and 6-10 include additional steps that are considered abstract ideas because they fail to provide meaningful significance that goes beyond generally linking the use of an abstract idea to a particular technological environment and using the computer to perform an abstract idea. Claims 2 and 6 read on someone identifying the speaker corresponding to a text record and tagging their identity. Claims 3 and 7 read on someone using a generic computer to capture and access an image, and recognizing a person. Claims 4 and 8 read on someone generating meeting notes using local text records and determining if the local meeting notes are similar to other meeting notes.
Claim 9 reads on someone determining a type and level for meeting notes. Claim 10 reads on someone determining if a member has a sufficient level of authorization and using a generic computer to send the meeting notes to the member if they do.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Asthana et al. (US 20220109585 A1; hereinafter referred to as Asthana) in view of Rainisto (US 20170060828 A1).

Regarding claim 1, Asthana teaches: a meeting minutes automatic generation system, comprising: a control circuit ([0058] In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention); a voice capturing device electrically connected to the control circuit and capturing a plurality of voice records of a plurality of members ([0025] meeting notes summary program 106 receives audio input from a microphone (not shown) into which the speaker is speaking.
The microphone may be a component of a computing device, for example, a laptop, or of a telephone); a storage device electrically connected to the control circuit (Program instructions and data used to practice embodiments of the present invention, e.g., meeting notes summary program 106, speech to text module 108, meeting notes database 110, and participant profile database 112, are stored in persistent storage 608 for execution and/or access by one or more of the respective processor(s) 604 of server computer 104 via cache 614. Also see Fig. 6.); wherein the control circuit accesses the storage device to execute… a speech-to-text conversion program ([0019] Speech to text module 108 receives spoken words from an audio or video conference call and transcribes the spoken words into written words), and a meeting minutes generation program… ([0018] Meeting notes summary program 106 provides customized summaries of virtual meetings by transcribing speaker utterances, i.e., audible language of a speaker); the speech-to-text conversion program converts the plurality of voice records into a plurality of text records ([0026] Meeting notes summary program 106 converts the speech from the audio input to text (step 204). Meeting notes summary program 106 uses speech to text technology to convert the received audio input into text); the meeting minutes generation program determines an importance level of each of the plurality of text records ([0029] Meeting notes summary program 106 determines the frequency that each phrase is highlighted and assigns a weight (step 210). In an embodiment, meeting notes summary program 106 counts each phrase, sentence, or text segment included in the highlighted text received from the participants to determine the number of times each phrase is highlighted by the participants. 
Meeting notes summary program 106 assigns a weight to each highlighted phrase based on the determined frequency) based on an identity of each of the plurality of members ([0031] meeting notes summary program 106 normalizes the weighted frequency of each phrase based on information known about the participants… meeting notes summary program 106 may adjust the ranking such that text highlighted by a participant whose role is project manager ranks higher than text highlighted by a participant whose role is test engineer); and the meeting minutes generation program generates an overall meeting minutes based on the plurality of text records ([0018] Meeting notes summary program 106 provides customized summaries of virtual meetings by transcribing speaker utterances, i.e., audible language of a speaker, such that participants can highlight segments of the text considered important and combining the highlighted text to create a representative generative summary of the meeting) and importance levels of the plurality of text records ([0018] Meeting notes summary program 106 determines the frequency that each phrase is highlighted and assigns a weight to the phrase based on the frequency. As used herein, the term frequency refers to a number of times or quantity a phrase, sentence, or other text is highlighted by participants of a virtual meeting. Meeting notes summary program 106 stores the highlighted phrases by participant. Meeting notes summary program 106 normalizes the weighted frequency).

Asthana does not explicitly teach, but Rainisto teaches: a comparison program… ([0019] Image recognition, voice recognition, biometric recognition or a combination thereof may be then utilized to identify the speaker); the comparison program identifies identities of the plurality of members corresponding to the plurality of voice records… ([0022] based on an identity of a speaker, the processor 202 may choose a specific speech recognition profile for that speaker.
In an embodiment, the processor 202 may analyze audio from the microphone 203 to recognize and identify the speaker);

Asthana and Rainisto are considered analogous in the field of speech processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Asthana to combine the teachings of Rainisto because doing so would allow for the use of speaker profiles for more efficient speech recognition and speech-to-text conversion, leading to more accurate meeting minutes (Rainisto [0041] Speech recognition techniques may be used to affect the conversion of speech to digital text. In an embodiment, a speech recognition profile based upon a speaker's identity may be used for speech to digital text conversion. In step 405, the digital text may be associated with the identity of its speaker. Also a point of time of the speech and respective digital text may be detected and obtained).

Regarding claim 2, the combination of Asthana and Rainisto discloses: the meeting minutes automatic generation system according to claim 1. Asthana further teaches: wherein the storage device further stores an identity tagging program, the control circuit accesses the storage device to execute the identity tagging program ([0027] In another embodiment, meeting notes summary program 106 may display the text by speaker identification), and the identity tagging program tags identities of the plurality of members on the plurality of text records ([0026] In an embodiment, meeting notes summary program 106 separates the text such that meeting notes summary program 106 identifies each speaker).

Regarding claim 3, the combination of Asthana and Rainisto discloses: the meeting minutes automatic generation system according to claim 1.
Rainisto further teaches: further comprising an image capturing device electrically connected to the control circuit ([0026] the present embodiments may be suitable for application in a variety of different types of computing devices which comprise and/or can be coupled to at least one microphone and at least one camera and which may be configured to annotate meeting transcriptions), wherein the storage device further stores a facial recognition program, the image capturing device captures a meeting image ([0029] A camera 201 may capture a video of the meeting and send it to a processor 202 for detection, recognition and identification of participants), and the control circuit accesses the storage device to execute the facial recognition program ([0029] A processor 202 may identify, by facial recognition, a speaker 60 and associate digital text to them).

Asthana and Rainisto are considered analogous in the field of speech processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Asthana and Rainisto to further combine the teachings of Rainisto because doing so would allow for the use of a camera to capture user facial features and gestures, leading to more detailed meeting minutes that incorporate a user’s physical actions and improved speaker recognition using facial recognition (Rainisto [0019] The processor 202 may analyze video from the camera 201 and/or audio from the microphone 203 to determine a speaker and the speaker's location with respect to other participants. Object tracking and/or acoustic localization may be used to differentiate a speaker from other participants… Skeletal maps may be analyzed by the processor 202 to detect and recognize gestures. The processor 202 may recognize an initiator as well as at least one target of a gesture from the skeletal maps and an awareness targets' location.
A target of a gesture may be a human participant of the meeting or a physical object, for example, a meeting aid).

Regarding claim 5, it recites similar limitations as claim 1 and therefore is rejected similarly. Regarding claim 6, it recites similar limitations as claim 2 and therefore is rejected similarly. Regarding claim 7, it recites similar limitations as claim 3 and therefore is rejected similarly.

Claims 4 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Asthana in view of Rainisto, as applied to claims 1-3 and 5-7 above, and further in view of Lal et al. (US 20240235872 A1; hereinafter referred to as Lal).

Regarding claim 4, the combination of Asthana and Rainisto discloses: the meeting minutes automatic generation system according to claim 1. The combination of Asthana and Rainisto does not explicitly teach, but Lal teaches: wherein when the meeting minutes automatic generation system is in a mute mode ([0043] the BIP may be determined based on the camera of the user's device being disabled, and/or a microphone of the user's device being muted or disabled, for at least a threshold period of time), the meeting minutes generation program generates a local meeting minutes based on a plurality of local text records in the mute mode ([0054] user's remaining in the virtual meeting during the BIP may have their audio or video or text or any other suitable data or any combination thereof captured locally for local analysis, even if the microphone and/or camera of such device is disabled with respect to the virtual meeting platform), the control circuit determines whether the local meeting minutes are related to the overall meeting minutes ([0055] the summary generator system may determine portions of a virtual meeting to be included in a summary during a BIP for a user based on a combination of such locally monitored reactions (for user devices being muted or having a disabled camera) as well as reactions determined by the central server (for user
devices not having the camera and/or microphone turned off, disabled or muted)), and integrates the local meeting minutes into the overall meeting minutes when the local meeting minutes are related to the overall meeting minutes ([0010] a machine learning model trained and deployed locally at a user's device may analyze raw audio or video data (e.g., the microphone or camera of the computing device or an external device may only be capturing audio or video for local use, or other actions may be locally analyzed), with the user's implicit or explicit consent. Parameters indicative of the locally monitored user data such as audio, video, chat, or any other suitable data, or any combination thereof, may be shared by the second computing device with the server, to assist in selecting optimal portions of the virtual meeting to summarize while preserving privacy of the at least one second user).

Asthana, Rainisto, and Lal are considered analogous in the field of speech processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Asthana and Rainisto to combine the teachings of Lal because doing so would allow for detailed meeting minutes to be generated without requiring the audio or video participation of every member in the meeting by using local analysis, leading to meeting notes that incorporate input from all participants while respecting user privacy (Lal [0055] one user (e.g., a professor or teacher) may ask other users to disable their video and/or audio to conserve bandwidth during the virtual meeting, or users may decide for privacy reasons to disable their video and/or audio during the virtual meeting. Even though video, audio and/or text of users in these circumstances may not be provided to the central server, the users reactions may be locally analyzed, and an indication of such analysis may be transmitted to the central server).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Asthana in view of Rainisto, as applied to claims 1-3 and 5-7 above, and further in view of Vendrow (US 20230297765 A1).

Regarding claim 9, the combination of Asthana and Rainisto discloses: the meeting minutes automatic generation method according to claim 5. The combination of Asthana and Rainisto does not explicitly teach, but Vendrow teaches: accessing the storage device through the control circuit to execute a keyword comparison program ([0086] the one-to-one keyword matching implemented by the content scoring service 330 may also incorporate the weighted values assigned to the set of relevant keywords to determine relevance scores for sentences), wherein the keyword comparison program identifies a type ([0050] the conference management system 150 may analyze other data associated with the meeting such as, meeting descriptions, meeting invitations, meeting agendas, and meeting invitees, to determine what type of subject matter is relevant to the meeting for the purposes of generating an accurate meeting summary) and a level of the overall meeting minutes ([0126] at step 515 process 500 generates a set of relevant sentences from the plurality of sentences based upon the relevance score assigned to each sentence and a relevance threshold, where the relevance threshold represents a desired level of understanding of content from the meeting session. In an embodiment, the summary generation service 335 receives the plurality of sentences with their associated relevance scores and uses the relevance threshold to select relevant sentences based upon which sentences have a relevance score above the relevance threshold.
The relevance threshold may be configured to a specific value based on the user's preferred level of understanding, a historical level of understanding for the overall meeting topic or based on historical user preferences of either the requesting user and/or other users that are similar to the requesting user).

Asthana, Rainisto, and Vendrow are considered analogous in the field of speech processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Asthana and Rainisto to combine the teachings of Vendrow because doing so would allow for meeting minutes to be generated based on a user’s desired level of understanding or expertise using keyword comparisons, leading to more customized meeting minutes and improving user experience and flexibility (Vendrow [0040] presently described approaches seek to address the issue of a fixed level of detail provided from automatically generated meeting summaries by implementing intelligent meeting summaries that are tailored each user's desired level of understanding of the meeting content. Determining a user's desired level of understanding of the meeting content may be based on the user's expertise, prior meeting summary requests, and/or any other indicator used to determine the optimal level of detail of a meeting summary for the user).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Asthana in view of Rainisto and Vendrow, as applied to claim 9 above, and further in view of Shen et al. (US 20190190908 A1; hereinafter referred to as Shen).

Regarding claim 10, the combination of Asthana, Rainisto, and Vendrow discloses: the meeting minutes automatic generation method according to claim 9.
The combination of Asthana, Rainisto, and Vendrow does not explicitly teach, but Shen teaches: the control circuit determining whether the identities of the plurality of members correspond to the level ([0031] server 28 may determine which users attended the meeting and/or have the appropriate security authorization based on identification of the user. The user may be identified in many ways. For example, the user may be identified based on a calendar invite, based on a user's schedule, based on recognized images of the user that are captured by camera device 22, based a recognized voice of the user that is captured by microphone device 24, etc.), and wherein when the identities of the plurality of members correspond to the level ([0051] Access control may be performed on a per-meeting basis. In business applications, access control may be performed according to privilege management. For example, a meeting owner/organizer may assign a privilege level to the meeting, which automatically grants access right according to the specific assigned privilege policy), the control circuit sends the overall meeting minutes to the plurality of members ([0028] server 28 may retrieve the attendee and/or user information from database 26, and use the information to aid in performance of the disclosed methods. For example, the information may be used to identify a meeting attendee and/or authorized user, to tag stored data streams inside meeting logs with attendee identification information, and to selectively allow access to the meeting logs based on the identification).

Asthana, Rainisto, Vendrow, and Shen are considered analogous in the field of speech processing.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Asthana, Rainisto, and Vendrow to combine the teachings of Shen because doing so would add a layer of security to meeting minutes by facilitating access control for meeting notes depending on a user’s access level, leading to improved security when distributing confidential meeting materials (Shen [0051] Successful and accurate association of EIDs and BIIDs, in addition to the aforementioned application of tagging meeting participants, may also be leveraged to facilitate access control for the dissemination of meeting logs/notes, via an email system, via a web service, or the like. By default, meeting participants may be granted access. Once a user accesses a shared link of the meeting log, her IDs may be checked against the ID information (either locally carried in the log or stored on a dedicated access control server that maintains the access list of meetings under its management), and the access may then be granted or denied accordingly).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Jones et al. (US 20110270609 A1) discloses an audio conference system that incorporates a speaker identifier that assigns roles to each speaker and includes a weighting scheme for determining the relevance of participants.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nathan Tengbumroong whose telephone number is (703)756-1725. The examiner can normally be reached Monday - Friday, 11:30 am - 8:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hai Phan, can be reached at 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NATHAN TENGBUMROONG/
Examiner, Art Unit 2654

/HAI PHAN/
Supervisory Patent Examiner, Art Unit 2654

Prosecution Timeline

Aug 01, 2024: Application Filed
Feb 18, 2026: Non-Final Rejection, §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530536: Mixture-Of-Expert Approach to Reinforcement Learning-Based Dialogue Management (2y 5m to grant; granted Jan 20, 2026)
Patent 12451142: Non-Wake Word Invocation of an Automated Assistant From Certain Utterances Related to Display Content (2y 5m to grant; granted Oct 21, 2025)
Patent 12412050: Multi-Platform Voice Analysis and Translation (2y 5m to grant; granted Sep 09, 2025)
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 43%
With Interview: 99% (+75.0%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
