Prosecution Insights
Last updated: April 19, 2026
Application No. 18/337,620

SYSTEM AND METHOD FOR NATURAL LANGUAGE BASED COMMAND RECOGNITION

Current status: Non-Final OA (§103)
Filed: Jun 20, 2023
Examiner: AGAHI, DARIOUSH
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Verizon Patent and Licensing Inc.
OA Round: 3 (Non-Final)

Grant Probability: 86% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (142 granted / 166 resolved; +23.5% vs TC avg; above average)
Interview Lift: +29.0% across resolved cases with interview (strong)
Typical Timeline: 2y 9m avg prosecution; 27 applications currently pending
Career History: 193 total applications across all art units
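As a sanity check, the headline rate and the Tech Center comparison can be reproduced from the raw counts shown above. A minimal sketch (variable names are illustrative; the 23.5-point gap is taken at face value as a percentage-point difference):

```python
# Raw counts from the examiner's career record shown above.
granted = 142
resolved = 166

# Career allow rate: 142/166 ≈ 85.5%, displayed rounded as 86%.
career_allow_rate = granted / resolved

# "+23.5% vs TC avg" reads as a 23.5-point gap, implying a TC average near 62%.
implied_tc_avg = career_allow_rate - 0.235

print(f"Career allow rate: {career_allow_rate:.1%}")  # 85.5%
print(f"Implied TC average: {implied_tc_avg:.1%}")    # 62.0%
```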

Statute-Specific Performance

§101: 25.8% (-14.2% vs TC avg)
§103: 47.8% (+7.8% vs TC avg)
§102: 10.0% (-30.0% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)

Tech Center averages are estimates • Based on career data from 166 resolved cases
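Each delta above is measured against the Tech Center average estimate. Working backwards, every row implies the same baseline, which is a quick consistency check on the chart data. A sketch with the figures hard-coded from the rows above:

```python
# (examiner's allowance rate %, delta vs TC average %) per statute, from above.
rows = {
    "§101": (25.8, -14.2),
    "§103": (47.8, +7.8),
    "§102": (10.0, -30.0),
    "§112": (12.6, -27.4),
}

# Implied TC baseline per statute: the examiner's rate minus the delta.
baselines = {statute: round(rate - delta, 1) for statute, (rate, delta) in rows.items()}
print(baselines)  # all four statutes imply a 40.0% TC average
```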

Office Action

§103
DETAILED ACTION

This Office action is in response to Applicant's RCE submission filed on 12/29/2025. Claims 1, 3-5, 8, 10-12, 15, 17, 18, and 20 were amended. Claims 2, 9, and 16 were canceled. Claims 1, 3-8, 10-15, and 17-20 are pending in the application, of which Claims 1, 8, and 15 are independent and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/29/2025 has been entered.

Response to Arguments

Applicant's arguments filed in the Amendment filed 12/29/2025 (herein "Amendment") with respect to the claim objection raised in the previous Office action have been fully considered, and they are persuasive. Therefore, the claim objection of various claims is withdrawn.

Applicant's amendments filed with respect to the 35 USC 101 rejections raised in the previous Office action have been fully considered and are persuasive. The claimed invention, as currently amended, overcomes the 35 USC 101 rejections. Therefore, the 35 USC 101 rejections are withdrawn.

Applicant's arguments filed in the Amendment with respect to the 35 USC §102 rejection raised in the previous Office action have been fully considered but are moot in view of the new grounds of rejection, which were necessitated by Applicant's amendment. Therefore, the previous rejection has been withdrawn. However, upon further consideration, a new ground of rejection is introduced for the independent claims, further adding Attwater et al. (US20230244855A1) to Pandey, along with updated recitations from Pandey. Please see the prior art section below for more detail, including updated citations and obviousness rationale.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 7-8, 14-15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Pandey (US11095468) in view of Attwater et al. (US20230244855A1) (herein "Attwater"). Pandey was applied in the previous Office Action.
Regarding claims 1, 8, and 15:

Pandey teaches [A method comprising: - claim 1], [A non-transitory computer-readable storage medium for storing instructions executable by a processor, the instructions comprising: - claim 8], and [A device comprising a processor configured to: - claim 15] (Pandey, Col. 25, ll. 31-38: "According to examples, the computer 900 has access to computer-readable storage media storing computer-executable instructions which, when executed by the computer 900, perform the various processes described above with regard to FIGS. 1-9. The computer 900 can also include computer-readable storage media for performing any of the other computer-implemented operations described herein."; Col. 7, ll. 28-30: "These resources comprise one or more processors and computer-readable storage media executable on the processors."; Col. 24, ll. 59-62: "… computer-readable storage media is any available media that provides for the non-transitory storage of data and that can be accessed by the computer 900."; and Col. 23, ll. 57-67: "The chipset 906 provides an interface between the CPUs 904 and the remainder of the components and devices on the baseboard 902. The chipset 906 can provide an interface to a RAM 908, used as the main memory in the computer 900. The chipset 906 can further provide an interface to a computer-readable storage medium such as a read-only memory ("ROM") 910 or non-volatile RAM ("NVRAM") for storing basic routines that help to startup the computer 900 and to transfer information between the various components and devices.")

receiving, from a user equipment (UE), a natural language (NL) user input from a user, the user input including a command related to a multi-party communication (MPC), (Pandey, Col. 6, ll. 24-37: "The meeting device [user equipment] 114 may be one or more devices, such as but not limited to a smart phone, a smart watch, a personal computer ("PC"), desktop workstation, laptop computer, tablet computer, notebook computer, personal digital assistants ("PDA"), …, or any other type of computing device capable of connecting to the network 112 and communicating with the meeting system 102. … communicate with one or more other devices to receive voice commands from users and/or perform processing related to functionality of the meeting system 102."; and Col. 9, ll. 42-45: "… the virtual assistant is configured to understand natural language voice commands and complete tasks for the user, such as interacting with the meeting summary service.")

the command including an MPC identifier corresponding to the MPC; (Pandey, Col. 19, ll. 37-42: "a user 110 may provide a voice command that requests the meeting summary service 120 to attend a meeting and generate meeting notes, the meeting summary service 120 may receive a request via a meeting invitation, the meeting summary service 120 may receive a request via a user interface (UI), or the like."; Col. 20, ll. 11-16: "The meeting notes generated by the meeting summary service 120 can include a variety of information such as participant information (e.g., who attended the meeting, where the participants were located, …), meeting information (e.g., time, place, location, …), meeting agenda information …"; and Col. 16, ll. 30-37: "… action item 1 was assigned at meeting M1, and action item 2 was assigned at meeting M3. When an individual meeting, such as meeting M1 302A(1) is selected, the action items 310 for that particular meeting may be just shown. In this way, the user may easily access meeting notes and information associated with more than one meeting within GUI 300, or some other UI.")

Note: Pandey teaches a command to request meeting notes generation, where the meeting notes include participant information among other things. Furthermore, a meeting identifier such as meeting M1, once selected, contains relevant information regarding the specific meeting. Someone skilled in the art can use the meeting identifier to obtain the necessary information.

extracting the MPC identifier from the command, the extraction comprising executing a trained command recognition model: (Pandey, Col. 3, ll. 63-66: "… any type of machine learning model/technique and/or predictive model may be used to determine, calculate, generate [extract], predict, etc., the data (e.g., the meeting notes) described herein."; Col. 3, ll. 52-57: "For example, a machine learning mechanism may build, modify or otherwise utilize a model that is created from example inputs and makes classifications, predictions or decisions using the model. The model may be trained using supervised and/or unsupervised learning."; Col. 20, ll. 11-16: "The meeting notes generated by the meeting summary service 120 can include a variety of information such as participant information (e.g., who attended the meeting, where the participants were located, …), meeting information (e.g., time, place, location, …), meeting agenda information …"; and Col. 16, ll. 30-37: "… action item 1 was assigned at meeting M1, and action item 2 was assigned at meeting M3. When an individual meeting, such as meeting M1 302A(1) is selected, the action items 310 for that particular meeting may be just shown. In this way, the user may easily access meeting notes and information associated with more than one meeting within GUI 300, or some other UI.")

Note: Pandey teaches a command to request meeting notes generation, where the meeting notes include participant information among other things. Furthermore, a meeting identifier such as meeting M1, once selected, contains relevant information regarding the specific meeting. Someone skilled in the art can use the meeting identifier to obtain the necessary information.

obtaining, from a database, based on the extracted MPC identifier, MPC data associated with the MPC, the MPC data including at least one of audio data or video data; (Pandey, Col. 8, ll. 6-9: "… the meeting summary service 120 provides recording data 128 to the voice service 122 and/or the transcription service 124 to generate a transcript of the meeting."; Col. 10, ll. 2-4: "… service 120 causes the meeting to be recorded (e.g., audio and/or video) via the media device 114 or obtains a recording of the meeting from a different source …"; Col. 13, line 66 - Col. 14, line 2: "… the meeting summarizer manager 202 stores recording data 128 associated with the recording of the meeting within data store [database] 126. The recording data 128 may include audio and/or video data."; and Col. 16, ll. 30-37: "… action item 1 was assigned at meeting M1, and action item 2 was assigned at meeting M3. When an individual meeting, such as meeting M1 302A(1) is selected, the action items 310 for that particular meeting may be just shown. In this way, the user may easily access meeting notes and information associated with more than one meeting within GUI 300, or some other UI.")

Note: Pandey teaches storing relevant meeting data, and once a meeting identifier such as meeting M1 is selected, it retrieves relevant information regarding the specific meeting. Someone skilled in the art can use the meeting identifier to retrieve the stored information.

determining, based on the user input and further execution of the trained command recognition model, a candidate action related to the MPC; (Pandey, Col. 3, ll. 47-57: "According to some examples, machine learning mechanisms may be utilized to identify, or assist in identifying, the action items, generating summary information for the meeting and/or different portions of the meeting, and the like. The term "machine learning" may refer to one or more programs that learns from the data it receives. For example, a machine learning mechanism may build, modify or otherwise utilize a model that is created from example inputs and makes classifications, predictions or decisions using the model. The model may be trained using supervised and/or unsupervised learning."; Col. 10, ll. 2-11: "… service 120 causes the meeting to be recorded (e.g., audio and/or video) via the media device 114 or obtains a recording of the meeting from a different source (e.g., from a meeting platform as discussed above). The meeting summary service may utilize the transcription service 124 to generate a machine transcription of the meeting from the recording data 128. The meeting summary service 120 may then generate the meeting notes by extracting highlights and actionable insights [candidate action] from the transcript data 134 of the meeting."; and Col. 10, ll. 15-22: "… meeting summary service 120 to identify action items. … In some examples, the action words may be stored as meeting data 132. In some configurations, the transcript data 134 for the meeting is parsed by the meeting summary service 120 to identify occurrences of the predefined list of action words.")

generating, based on the user input and the MPC data, an action output corresponding to the candidate action, (Pandey, Col. 10, ll. 6-11: "… The meeting summary service may utilize the transcription service 124 to generate a machine transcription of the meeting from the recording data 128. The meeting summary service 120 may then generate the meeting notes by extracting highlights and actionable insights [candidate action] from the transcript data 134 of the meeting."; and Col. 10, ll. 15-22: "… meeting summary service 120 to identify action items. … In some examples, the action words may be stored as meeting data 132. In some configurations, the transcript data 134 for the meeting is parsed by the meeting summary service 120 to identify occurrences of the predefined list of action words.")

Note: Action items/words represent the action output, which is based on the meeting notes, which represent the user input.

providing, to the UE, the action output to be presented to the user, and (Pandey, Col. 12, ll. 27-37: "The meeting summary service 120 may also provide a way for users 110 to interact with the action items/tasks generated from the meeting. … The meeting summary service 120 may also make the tasks viewable by the users 110 that attended the meeting, and/or other authorized users. For example, a user 110 may utilize a user interface to view tasks assigned to them, as well as view other tasks assigned to other users 110."; and Col. 5, ll. 58-60: "A user 110 of the meeting system 102 can utilize a user interface, or some other input device, to access the meeting system 102 through a network 112.")

causing execution, on the UE, of the action output in accordance with the command of the user input. (Pandey, Col. 8, ll. 13-17: "… the meeting device 114 may act as an input device for the meeting summary service 120 for users, such as users 110. A user, such as users 110, may interact with the meeting device 114 to access functionality of the meeting system 102 using voice commands."; Col. 9, ll. 5-9: "For instance, one of the users 110A, 110B, 110C, or 110D may say to a meeting summary service 120 'Summarizer, join the meeting' or 'Summarizer, record the meeting'. Once joined, the meeting summary service 120 can record the meeting, …"; and Col. 15, ll. 31-41: "… the meeting summary service 120 may identify action items (e.g., a follow-up meeting is to be scheduled, a task is to be assigned to one or more users, …). In some examples, the meeting summary service 120 can utilize the meeting summarizer manager 202 to provide an option (e.g., within user interface 216) to view and edit (e.g., change the task to completed, remove the task, update the task), schedule a follow-up meeting (e.g., book one or more conference rooms), send a meeting invitation to the participants with the agenda topic(s) to be covered in the follow-up meeting, and the like.")

Pandey does not teach, however, Attwater teaches the generation comprising executing a trained generative model; and (Attwater, Par. 0167: "g. generate 2908 from the short text multiple summary sentences, such as by using with a beam search decoding strategy of a deep learning generative model trained with datasets of long texts and summaries, and optionally including one or more narrative structures 2912;"; and Par. 0168: "h. identify 2910 from the multiple summary sentences the one summary sentence that follows a best or a preferred summary format (e.g., for client intents, summaries that describe the action relative to the client may be preferred, 'The client has requested a new card').")

Attwater is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pandey further in view of Attwater to generate an action output by executing a trained generative model. Motivation to do so would provide improvements for automatically generating a summary note related to a digitally-recorded interlocutor conversation (Attwater, Par. 0009).

Regarding claims 7, 14, and 19, Pandey, as modified above, teaches the method, the medium, and the device claims of 1, 8, and 15, respectively.
Pandey, as modified above, further teaches receiving a second user input from the user, the second user input including a follow-on command related to the action output; and (Pandey, Col. 3, ll. 8-18: "As described herein, the virtual assistant is configured to understand natural language voice commands and complete tasks for the user, such as interacting with the meeting summary service. For instance, a user may search the meeting notes using voice, and/or cause a portion of the meeting notes to be presented to the user (e.g., via a speaker and/or display) [note: this is the result of the first command]. As another example, the user may ask the meeting summary service: 'Summarizer, what were the highlights of the meeting?' or 'Summarizer, what are my action items from the meeting?' In response, the meeting summary service may provide the requested information.")

Note: the follow-on command is "Summarizer, what were the highlights of the meeting?"

performing the follow-on command with respect to the action output. (Pandey, Col. 3, ll. 15-18: "As another example, the user may ask the meeting summary service: 'Summarizer, what were the highlights of the meeting?' or 'Summarizer, what are my action items from the meeting?' In response, the meeting summary service may provide the requested information.")

Note: providing the requested information is the result of performing the follow-on command.

Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Pandey and Attwater, and further in view of Chen (WO2024069754A1). Chen was applied in the previous Office Action.

Regarding claims 3, 10, and 17, Pandey, as modified above, teaches the method, the medium, and the device claims of 1, 8, and 15, respectively. Pandey, as modified above, does not teach, however, Chen teaches the trained command recognition model having a Bidirectional Encoder Representations from Transformers (BERT) architecture. (Chen, Page 4: "… The extraction process of the command part and the extraction process of the predicate and object may be performed using a trained model for language processing that has been trained in advance. As the trained model, for example, the extraction process may be performed using a trained model using spaCy and BERT (Bidirectional Encoder Representations from Transformers). A general training method may be used …")

Chen is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pandey, as modified above, further in view of Chen to employ the command recognition model having a Bidirectional Encoder Representations from Transformers (BERT) architecture. Motivation to do so would provide services according to the user's intent (Chen, Page 1).

Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Pandey and Attwater, and further in view of Joshi (US20220385703A1). Joshi was applied in the previous Office Action.

Regarding claims 4, 11, and 18, Pandey, as modified above, teaches the method, the medium, and the device claims of 1, 8, and 15, respectively. Pandey, as modified above, does not teach, however, Joshi teaches the trained generative model having a Bidirectional and Auto-Regressive Transformers (BART) architecture. (Joshi, Par. 0055: "BART (bidirectional and auto-regressive transformer) is a pre-trained NLP/ML model that combines bidirectional and auto-regressive transformers. BART uses a standard transformer-based neural machine translation architecture, which includes a de-noising auto-encoder built with a sequence-to-sequence model that is applicable to a range of end tasks. For example, fine-tuned training of BART can be applied to achieve a variety of text generation, comprehension, abstractive dialogue, question answering, and summarization end tasks.")

Note: BART reads on a generative model.

Joshi is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pandey, as modified above, further in view of Joshi to employ the trained generative model having a Bidirectional and Auto-Regressive Transformers (BART) architecture. Motivation to do so would improve the overall accuracy of the particular result/determination (Joshi, Par. 0074).

Claims 5, 6, 12, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pandey, Attwater, and Joshi, and further in view of Selvaraj (US20240321447A1) and Saleh (US20240185065A1). Selvaraj and Saleh were applied in the previous Office Action.

Regarding claims 5, 12, and 20, Pandey, as modified above, teaches the method, the medium, and the device claims of 1, 8, and 15, respectively.

Pandey, as modified above, further teaches obtaining annotated MPC data; (Pandey, Col. 15, ll. 7-17: "According to some examples, the agenda data 218 can be structured using simple annotations, created using one or more templates, created using a meeting agenda generation tool, and the like. For example, a user may select a template and supply information about the meeting to create a meeting agenda for a meeting. In some configurations, meeting agenda items can be listed as bullet items, numerated items, and/or labeled using some other format such that the agenda items for the meeting can be determined. Generally, any format may be utilized that is understood by functionality accessed by the meeting summary service."; and Col. 21, ll. 10-12: "In some examples, the meeting summary service 120 tags [annotated] the transcript such that the identified action items are associated with the relevant portions of the transcript.")

[claim 20 only] the annotated MPC data including data associated with at least one of an MPC recording, an event timeline, or a combination thereof; (Pandey, Col. 15, ll. 45-61: "According to some configurations, the meeting summarizer manager 202 can be utilized to generate a meeting record 208 for the meeting. In some examples, the meeting record 208 is stored as meeting data 152 in the data store 126. In some examples, a meeting record 208 can include an audio and/or video recording of the meeting, a transcript of the meeting, content presented or discussed during the meeting, follow-up items, and the like. In contrast to just creating a recording of the meeting, the meeting record 208 may associate different parts of the recording with the different agenda items. According to some examples, the meeting summarizer manager 202 utilizes one or more transcription services 118A to generate transcript data 130 for the meeting. In this way, a user may locate the relevant content more quickly. Similarly, a user may access the meeting record to see the presentation material, any follow-up items, and the like.")

Pandey, as modified above, does not teach, however, Selvaraj teaches processing the annotated MPC data by cleaning and normalizing the annotated MPC data to generate processed annotated MPC data; (Selvaraj, Par. 0113: "Training of a machine learning classifier typically comprises:"; Par. 0114: "a) Obtaining a dataset along with associated classification labels (e.g. outcomes);"; and Par. 0115: "b) Pre-processing the data, which includes data quality techniques/data cleaning to remove any label noise or bad data and preparing the data so it is ready to be utilised for training and validation;")

identifying a training dataset and a testing dataset from the processed annotated MPC data; (Selvaraj, Par. 0118: "e) Splitting the dataset into a training dataset and a validation [testing] dataset and/or a test dataset;")

training the generative model using the training dataset; (Selvaraj, Par. 0119: "f) Training the model by using a machine learning algorithm (including using neural network and deep learning algorithm) on the training dataset; typically, during the training process, many models are produced by adjusting and tuning the model configurations in order to optimise the performance of model according to an accuracy metric;")

evaluating the trained generative model using the testing dataset based on predetermined metrics, (Selvaraj, Par. 0119: "f) Training the model by using a machine learning algorithm (including using neural network and deep learning algorithm) on the training dataset; typically, during the training process, many models are produced by adjusting and tuning the model configurations in order to optimise the performance of model according to an accuracy metric;")

Note: as recited, many models are produced and their performance optimized according to an accuracy metric, which implies that evaluation of the models was conducted as well.

generating the trained generative model based on whether the trained generative model evaluation is acceptable. (Selvaraj, Par. 0119: "f) Training the model by using a machine learning algorithm (including using neural network and deep learning algorithm) on the training dataset; typically, during the training process, many models are produced by adjusting and tuning the model configurations in order to optimise the performance of model according to an accuracy metric; and"; and Par. 0120: "g) Choosing the best 'final' model based on the model's performance on the validation dataset; the model is then applied to the 'unseen' test dataset to validate the performance of the final machine learning model.")

Note: the final model implies the evaluation indicated an acceptable model.

Selvaraj is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pandey, as modified above, further in view of Selvaraj to process the annotated MPC data by cleaning and normalizing the annotated MPC data to generate processed annotated MPC data; identify a training dataset and a testing dataset from the processed annotated MPC data; train the generative model using the training dataset; evaluate the generative model using the testing dataset based on predetermined metrics; and generate the trained generative model based on whether the generative model evaluation is acceptable. Motivation to do so would allow the system to continuously adapt and update as new information is received (Selvaraj, Par. 0074).

Pandey, as modified above, does not teach, however, Saleh teaches the predetermined metrics selected from a group comprising: ROUGE-1, ROUGE-2, ROUGE-L, and Perplexity evaluation metrics; (Saleh, Par. 0037: "… Specifically, the system can do so by evaluating a pre-training objective function that measures a difference between the prediction and the one or more selected segments with respect to, e.g., perplexity or ROUGE metric. … In particular, the system computes the gradient of the pre-training objective function with respect to the parameters of the text summarization neural network."; and Par. 0035: "… the system first evaluates, e.g., by computing a ROUGE1-FI score, an importance measure of the segment which characterizes a relative importance …")
Saleh is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pandey, as modified above, further in view of Saleh to employ the predetermined metrics selected from the group comprising: ROUGE-1, ROUGE-2, ROUGE-L, and Perplexity evaluation metrics. Motivation to do so would improve the effectiveness of the training engine (Saleh, Par. 0023). Regarding claims 6, and 13, Pandey, as modified above, teaches the method, and the medium claims of 1, and 8 respectively. Pandey, as modified above, further teaches wherein the annotated MPC data includes data associated with at least one of an MPC recording, an event timeline, or a combination thereof. (Pandey, Col. 15, ll. 45-61:” According to some configurations, the meeting summarizer manager 202 can be utilized to generate a meeting record 208 for the meeting. In some examples, the meeting record 208 is stored as meeting data 152 in the data store 126. In some examples, a meeting record 208 can include an audio and/or video recording of the meeting, a transcript of the meeting, content presented or discussed during the meeting, follow-up items, and the like. In contrast to just creating a recording of the meeting, the meeting record 208 may associate different parts of the recording with the different agenda items. According to some examples, the meeting summarizer manager 202 utilizes one or more transcription services 118A to generate transcript data 130 for the meeting. In this way, a user may locate the relevant content more quickly. Similarly, a user may access the meeting record to see the presentation material, any follow-up items, and the like.”) Conclusion The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Gupta et al. (US20220207489A1) teaches in Par. 
0012:” … program based in input received at a third-party meeting service. … Methods can also include causing the task information to be displayed by the third-party meeting service during the event in response to obtaining the task information and monitoring, during the event, an API endpoint of the third-party meeting service, for user input related to the task information. In response to receiving the user input the methods can include analyzing the user input to identify a request to create a new task in the task management service and in response to receiving the request to create the new task, sending a task creation request to the task management service, where the task creation request including the user input and an event identification generated by the third-party meeting service for the event.” Examiner's Note: Examiner has cited particular columns and line numbers and/or paragraph numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested from the applicant in preparing responses, to fully consider the references in entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. In the case of amending the Claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DARIOUSH AGAHI whose telephone number is (408)918-7689. 
The examiner can normally be reached Monday through Thursday and on alternate Fridays, 7:30-4:30 PT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at 571-272-7453. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

DARIOUSH AGAHI, P.E.
Primary Examiner

/DARIOUSH AGAHI/
Primary Examiner, Art Unit 2656
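For context on the evaluation metrics named in the rejection: ROUGE-1 scores the unigram overlap between a generated summary and a reference summary, reported as precision, recall, and F1 (ROUGE-2 and ROUGE-L extend the same idea to bigrams and longest common subsequences). The sketch below is purely illustrative of how ROUGE-1 is commonly computed; it is not taken from the application or the cited references, and the example strings are hypothetical.

```python
from collections import Counter

def rouge_1(reference: str, candidate: str) -> dict:
    """Compute ROUGE-1 precision, recall, and F1 from unigram overlap."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each candidate unigram is credited at most as many
    # times as it appears in the reference.
    overlap = sum((ref_counts & cand_counts).values())
    recall = overlap / max(sum(ref_counts.values()), 1)
    precision = overlap / max(sum(cand_counts.values()), 1)
    f1 = (2 * precision * recall / (precision + recall)) if overlap else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical example: candidate omits one reference word ("meeting").
scores = rouge_1("the meeting record includes a transcript",
                 "the record includes a transcript")
```

Here the candidate matches 5 of the 6 reference unigrams, so recall is 5/6 while precision is 1.0; identical strings score F1 = 1.0.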

Prosecution Timeline

Jun 20, 2023
Application Filed
May 08, 2025
Non-Final Rejection — §103
Aug 11, 2025
Response Filed
Sep 25, 2025
Final Rejection — §103
Dec 29, 2025
Request for Continued Examination
Jan 25, 2026
Response after Non-Final Action
Feb 22, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596890
SYSTEMS AND METHODS FOR CROSS-LINGUAL TRANSFER LEARNING
2y 5m to grant Granted Apr 07, 2026
Patent 12596876
SYSTEMS AND METHODS FOR IMPROVING TEXTUAL DESCRIPTIONS USING LARGE LANGUAGE MODELS
2y 5m to grant Granted Apr 07, 2026
Patent 12591743
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM FOR EXTRACTING A NAMED ENTITY FROM A DOCUMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12586586
SPEECH RECOGNITION WITH SELECTIVE USE OF DYNAMIC LANGUAGE MODELS
2y 5m to grant Granted Mar 24, 2026
Patent 12579448
TECHNIQUES FOR POSITIVE ENTITY AWARE AUGMENTATION USING TWO-STAGE AUGMENTATION
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
86%
Grant Probability
99%
With Interview (+29.0%)
2y 9m
Median Time to Grant
High
PTA Risk
Based on 166 resolved cases by this examiner. Grant probability derived from career allow rate.
