Prosecution Insights
Last updated: April 19, 2026
Application No. 18/315,027

METHOD FOR MULTI-CHANNEL AUDIO SYNCHRONIZATION FOR TASK AUTOMATION

Status: Final Rejection §103
Filed: May 10, 2023
Examiner: PULLIAS, JESSE SCOTT
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: Nlx Inc.
OA Round: 6 (Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 2y 8m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 83% (873 granted / 1052 resolved), +21.0% vs Tech Center average (above average)
Interview Lift: +13.0% (moderate), comparing allowance rates for resolved cases with vs. without an examiner interview
Typical Timeline: 2y 8m average prosecution; 47 applications currently pending
Career History: 1,099 total applications across all art units
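
As a rough cross-check, the headline figures follow from simple arithmetic on the counts shown above. A minimal sketch in Python (assuming the grant probability simply tracks the career allow rate and the with-interview figure adds the interview lift; the tool's actual model may differ):

# Back-of-the-envelope reproduction of the examiner stats shown above.
# Assumption: "Grant Probability" tracks the career allow rate, and the
# with-interview figure is just allow rate + interview lift.
granted, resolved = 873, 1052
allow_rate = granted / resolved                  # ~0.830 -> 83%
interview_lift = 0.13
with_interview = allow_rate + interview_lift     # ~0.960 -> 96%
print(f"{allow_rate:.0%} career allow rate, {with_interview:.0%} with interview")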

Statute-Specific Performance

§101: 15.0% (-25.0% vs TC avg)
§103: 50.4% (+10.4% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 1052 resolved cases.
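
The per-statute deltas are likewise consistent with a straight difference against an estimated Tech Center average of roughly 40% for each statute (that 40% is inferred here from the displayed deltas, not taken from any source; illustrative only):

# Hypothetical reconstruction of the "vs TC avg" deltas shown above.
examiner_rate = {"101": 0.150, "103": 0.504, "102": 0.197, "112": 0.049}
tc_avg_estimate = 0.400  # inferred: every displayed delta implies ~40%
for statute, rate in examiner_rate.items():
    delta = rate - tc_avg_estimate
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")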

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This office action is in response to the correspondence filed 02/02/26 regarding application 18/315,027, in which claims 8-14 were cancelled. Claims 1-7 and 15-20 are pending in the application and have been considered.

Response to Arguments

Claims 8-14 were cancelled, so the rejections of these claims are moot.

Applicant's arguments on pages 7-9 regarding the 35 U.S.C. 103 rejections of claims 1-7 and 15-20 based on Callaghan, Baldwin, Johnson, and Dutta have been considered and are not persuasive. In particular, Applicant argues on page 8 that Baldwin does not disclose or suggest "sending a signal to cause the audio channel to remain synchronized with the non-audio channel" as recited in independent claim 1. According to Applicant, Baldwin merely describes establishing and synchronizing device listeners for each of multiple "devices in the environment" ([0027]) where the "various devices may have different internal clocks" ([0044]) such that the signals output from the device listeners are aligned (see [0047]).

In response, the examiner agrees with Applicant that Baldwin describes establishing and synchronizing device listeners for each of multiple "devices in the environment" where the "various devices may have different internal clocks" such that the signals output from the device listeners are aligned, but respectfully disagrees with Applicant that Baldwin does not disclose or suggest "sending a signal to cause the audio channel to remain synchronized with the non-audio channel". As noted in the Office Action of 10/08/25 at page 6, the key to the synchronization in Baldwin is publishing information related to the internal clock or timing of the associated device(s). Indeed, according to Baldwin at [0044], "synchronizing the device listeners may include each of the respective device listeners publishing information relating to the internal clock or timing of the associated device." As Baldwin further explains at [0047], "…each device that receives an input from the user may have an internal clock or timing mechanism. In an operation 250, each device may therefore determine when the input was received from a local perspective, and notify the voice-click module that the input was received…. using the device listener signals synchronized as described in reference to operation 220, an operation 260 may include aligning the signals for the device interactions and the utterance."

In other words, each device listener publishes information, i.e. sends a signal, relating to the internal clock or timing of the associated device. When a device listener detects an input, it notifies the voice-click module that the input was received and of the local time at which it was received. The voice-click module then adjusts the local times using the published internal clock timings to align the received inputs. This is how the voice-click module knows when a non-voice channel input, such as selection of text using a mouse or keyboard, was received in absolute time relative to a voice input such as "Is this available on Amazon.com?" (see [0046]). Otherwise, the timestamps received for the non-voice events would be meaningless.
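
For concreteness, the alignment the examiner reads into Baldwin amounts to shifting each device's locally timestamped events by a published clock offset onto one shared timeline. A minimal sketch of that reading in Python (hypothetical names; not Baldwin's actual implementation):

# Illustrative sketch of the timestamp alignment attributed to Baldwin:
# device listeners publish clock offsets, and locally timestamped events
# are shifted onto a common reference timeline before being compared.
from dataclasses import dataclass

@dataclass
class DeviceEvent:
    device_id: str
    local_time: float  # seconds on the device's own clock
    payload: str

class VoiceClickAligner:
    def __init__(self):
        self.clock_offsets = {}  # device_id -> offset to the reference clock

    def publish_clock(self, device_id, device_time, reference_time):
        # the "publishing information relating to the internal clock or timing" step
        self.clock_offsets[device_id] = reference_time - device_time

    def align(self, event):
        # convert the event's local timestamp to absolute (reference) time
        return event.local_time + self.clock_offsets[event.device_id]

aligner = VoiceClickAligner()
aligner.publish_clock("mouse", device_time=10.0, reference_time=100.0)
aligner.publish_clock("microphone", device_time=55.0, reference_time=100.0)
click = DeviceEvent("mouse", 12.4, "text selection")
utterance = DeviceEvent("microphone", 57.5, "Is this available on Amazon.com?")
# 102.4 vs 102.5: the click precedes the utterance on the shared timeline
print(aligner.align(click), aligner.align(utterance))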
The publication of internal clock or timing information is fairly considered to disclose, or at the very least suggest, the claimed "… send a signal to cause the audio channel to remain synchronized with the non-audio channel…" for at least the reason that it is used to align subsequently received non-audio events with audio channel events in absolute time, i.e. remain synchronized, as seen above.

Applicant's arguments on page 9 regarding claim 15 are similar to those addressed above regarding claim 1, and are not persuasive for similar reasons.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Callaghan et al. (US 7406657) in view of Baldwin et al. (US 20150348544).

Consider claim 1: Callaghan discloses an apparatus, comprising: a processor (a processor is inherent for the multithreaded processing of command events, Col 6 lines 52-58); and a memory operably coupled to the processor, the memory storing instructions (a parser contained within the multimodal browser builds the representation from lines of code, Col 6 lines 22-28, for which memory coupled to the processor is inherent) to cause the processor to: receive an indication of a start of a session, the session associated with a user and having an audio channel that is synchronized with a non-audio channel (the user selects a field to fill within the form, which initiates audio presentation of the form field synchronized with display of the form field, Col 4 lines 30-48, and allows the user to fill in the form via verbal interaction, Col 2 lines 20-23, by allowing navigation of the form and filling in of fields in response to prompts, Col 3 lines 47-53); repeatedly determine, after the receiving, whether a prompt on the non-audio channel has been received from the user (when the last field of the form is reached, a pause is generated and the system determines whether the user uses a tactile input method to either submit, cancel, or reset the form, Col 5 lines 3-18, before the pause expires, and if not, loops back to the top of the form); and for each determination that the prompt on the non-audio channel has not been received from the user, send a signal to cause the audio channel to remain synchronized with the non-audio channel (the audio progression loops back to the top of the form, i.e.
remains synchronized with the displayed form, until the user uses a tactile input method to either submit, cancel, or reset the form, Col 5 lines 3-18, allowing the visual component of the form to remain synchronized with the audio presentation, Col 5 lines 34-40, using objects in the audio queue which request a repositioning of the audio queue (including the ability to loop back and repeat part of the audio queue), Figure 6 and Col 5-6 lines 59-2).

Callaghan does not specifically mention a session for at least one of a non-audio channel defined by a text messaging mobile application or a voice-powered automated assistant; and send a signal to cause the audio channel to remain synchronized with the non-audio channel without causing audible output on the audio channel during a time period between sending that signal and a next determination that the prompt on the non-audio channel has not been received from the user.

Baldwin discloses a session for at least one of a non-audio channel defined by a text messaging mobile application or a voice-powered automated assistant (conversational language processor receives voice input, performs speech recognition, and provides automated user assistance with e.g. navigation via output device, Fig 1 elements 105a, 110, 120, 150, and 180, [0040]); and send a signal to cause the audio channel to remain synchronized with the non-audio channel without causing audible output on the audio channel during a time period between sending that signal and a next determination that the prompt on the non-audio channel has not been received from the user (the voice-click module coupled to the input device continually monitors the non-voice input device to detect occurrences of at least one non-voice device interaction, i.e. repeatedly determines that a prompt on a non-audio channel has not been received from the user, [0027], upon establishing a device listener for the non-voice input device, [0043], and publishes information related to the internal clock or timing of the device to cause it to remain synchronized with the voice input device, [0043-0044], to align voice and non-voice inputs, [0047]; this involves continually listening for input and determining timing information of non-voice inputs received without causing audible output).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Callaghan by including a session for at least one of a non-audio channel defined by a text messaging mobile application or a voice-powered automated assistant, and sending a signal to cause the audio channel to remain synchronized with the non-audio channel without causing audible output on the audio channel during a time period between sending that signal and a next determination that the prompt on the non-audio channel has not been received from the user, in order to enable cooperative processing of utterances and non-voice device interactions, as suggested by Baldwin ([0011]), predictably making the human-to-machine interface less cumbersome to use, as suggested by Baldwin ([0003]). The cited references are analogous art in the field of multi-modal input.
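
Read as a whole, the limitation at issue in claim 1 describes a polling loop: repeatedly check the non-audio channel for a prompt and, while none has arrived, send a synchronization signal that produces no audible output. A minimal sketch of that reading in Python (illustrative only; the class and method names are hypothetical and appear in neither the claims nor the cited references):

# Hypothetical sketch of the claim 1 behavior as characterized above.
from collections import deque

class NonAudioChannel:
    def __init__(self, incoming):
        self.incoming = deque(incoming)  # e.g. taps, clicks, text messages
    def poll_prompt(self):
        return self.incoming.popleft() if self.incoming else None

class AudioChannel:
    def send_sync_signal(self):
        print("sync signal sent (no audible output)")  # remain synchronized, silently
    def play(self, utterance):
        print(f"audible output: {utterance}")

def run_session(audio, non_audio, max_polls=5):
    for _ in range(max_polls):
        prompt = non_audio.poll_prompt()   # repeatedly determine
        if prompt is None:
            audio.send_sync_signal()       # no prompt yet: stay in sync, no audio
        else:
            audio.play(f"received '{prompt}' on the non-audio channel")
            break

run_session(AudioChannel(), NonAudioChannel([None, None, "SUBMIT"]))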
Consider claim 15: Callaghan discloses a method, comprising: receiving a representation of a request from a compute device associated with a user to complete a task (the user selects a field to fill within the form using the multi-modal browser, for which execution on a computer device is inherent, which initiates audio presentation of the form field synchronized with display of the form field, Col 4 lines 30-48); causing an audio channel associated with the user to synchronize with at least one non-audio channel associated with the user (audio presentation of the form field is synchronized with display of the form field, Col 4 lines 30-48); sending a first signal to cause a first audible output to be output a first time by the audio channel (audio prompting in association with form element 204, Col 4-5 lines 42-5); repeatedly determining whether a prompt on the at least one non-audio channel has been received from the user (determining whether a user uses a verbal command to either SUBMIT, CANCEL, or RESET the form, or uses a tactile input method, i.e. a prompt on the non-audio channel, to accomplish the same thing, Col 5 lines 13-18); for each determination that the prompt on the non-audio channel has not been received from the user, sending a second signal to cause the audio channel to remain open (the audio progression loops back to the top of the form, i.e. remains synchronized with the displayed form, until the user uses a tactile input method to either submit, cancel, or reset the form, Col 5 lines 3-18, allowing the visual component of the form to remain synchronized with the audio presentation, Col 5 lines 34-40, using objects in the audio queue which request a repositioning of the audio queue (including the ability to loop back and repeat part of the audio queue), without outputting audio, Figure 6 and Col 5-6 lines 59-2); and in response to a determination that the prompt on the at least one non-audio channel has been received from the user (determining whether the user has selected field 104 via a touchscreen, Col 4 lines 30-41, or selected "SUBMIT" via a tactile input method, such as the touchscreen, Col 5 lines 12-18): selecting a second audible output based on a determination that the prompt is in accordance with the task (upon receiving "SUBMIT" via a tactile input method, such as the touchscreen, Col 5 lines 12-18, the audio service thread progresses to the construct end element from the FORM element, and outputs message M from the audio element queue, "This message is after the form", Col 7 lines 9-13, Fig. 6); selecting a third audible output based on a determination that the prompt is not in accordance with the task (if the user has selected field 104 via a touchscreen, Col 4 lines 30-41, prompt H: "What is the customer's problem" is output, Fig 6, by the audio element queue); and sending a third signal to cause one of the second audible output or the third audible output to be output on the audio channel (the thread doing the audio progression on behalf of the browser requests that progress be stopped, re-positions, and then restarted, to output the various prompts, Col 6-7 lines 29-13, Fig. 6).

Callaghan does not specifically mention sending a second signal to cause the audio channel to remain open and without causing audible output on the audio channel during a time period between sending that signal and a next determination that the prompt on the non-audio channel has not been received from the user.
Baldwin discloses sending a second signal to cause the audio channel to remain open and without causing audible output on the audio channel during a time period between sending that signal and a next determination that the prompt on the non-audio channel has not been received from the user (the voice-click module coupled to the input device continually monitors the non-voice input device to detect occurrences of at least one non-voice device interaction, i.e. repeatedly determines that a prompt on a non-audio channel has not been received from the user, [0027], upon establishing a device listener for the non-voice input device, [0043], and publishes information related to the internal clock or timing of the device to cause it to remain synchronized with the voice input device, which is caused to continue listening, i.e. remain open, [0043-0044], to align voice and non-voice inputs, [0047]; this involves continually listening for input and determining timing information of non-voice inputs received without causing audible output). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Callaghan by sending a second signal to cause the audio channel to remain open and without causing audible output on the audio channel during a time period between sending that signal and a next determination that the prompt on the non-audio channel has not been received from the user, for reasons similar to those for claim 1.

Consider claim 2: Callaghan discloses that the signal to cause the audio channel to remain synchronized with the non-audio channel further causes an inaudible output on the audio channel to the user in response to each determination that the prompt on the non-audio channel has not been received from the user (the audio progression loops back to the top of the form, i.e. remains synchronized with the displayed form, until the user uses a tactile input method to either submit, cancel, or reset the form, Col 5 lines 3-18, allowing the visual component of the form to remain synchronized with the audio presentation, Col 5 lines 34-40, and the electrical waveform output to the speaker is considered inaudible prior to being transduced into an acoustic pressure wave, for example, inherent in producing what is ultimately audio that is audible to the user from the WAV file, Col 6 lines 34-42).

Consider claim 16: Callaghan discloses that the second signal further causes an inaudible output to be output on the audio channel to the user in response to each determination that the prompt on the non-audio channel has not been received from the user (when the last field of the form is reached, a pause is generated and the system determines whether the user uses a tactile input method to either submit, cancel, or reset the form, Col 5 lines 3-18, before the pause expires, and if not, loops back to the top of the form, which causes audio output of "What is the customer name?", Fig. 6, steps K and D, Col 5-6 lines 59-4; the electrical waveform output to the speaker is considered inaudible prior to being transduced into an acoustic pressure wave, for example, inherent in producing what is ultimately audio that is audible to the user from the WAV file, Col 6 lines 34-42).
Consider claim 17: Callaghan discloses the prompt is a first prompt, the method further comprising: repeatedly determining whether a second prompt on the at least one non-audio channel has been received from the user (when the last field of the form is reached, a pause is generated and the system determines whether the user uses a tactile input method to either submit, cancel, or reset the form, Col 5 lines 3-18, before the pause expires, and if not, loops back to the top of the form, and upon reaching the last field, this solicitation of input for the user to submit the form is "a second prompt", noting the claim does not require the content of the prompts to differ); sending a fourth signal to cause the inaudible output on the audio channel to the user in response to each determination that the second prompt on the at least one non-audio channel has not been received from the user (the audio progression loops back to the top of the form, i.e. remains synchronized with the displayed form, until the user uses a tactile input method to either submit, cancel, or reset the form, Col 5 lines 3-18, allowing the visual component of the form to remain synchronized with the audio presentation, Col 5 lines 34-40, and the electrical waveform output to the speaker is considered inaudible prior to being transduced into an acoustic pressure wave, for example, inherent in producing what is ultimately audio that is audible to the user from the WAV file, Col 6 lines 34-42); and in response to the determination that the second prompt on the at least one non-audio channel has been received from the user: selecting a fourth audible output based on an activity by the user on the at least one non-audio channel (if the user has selected field 104 via a touchscreen, Col 4 lines 30-41, outputting the next prompt in the audio element queue, i.e. "a fourth audible output"), and sending a fourth signal to cause the fourth audible output to be output on the audio channel (the thread doing the audio progression on behalf of the browser requests that progress be stopped, re-positions, and then restarted, to output the various prompts, the prompt after the third being considered a "fourth audible output", Col 6-7 lines 29-13, Fig. 6).

Claims 3, 4, 6, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Callaghan et al. (US 7406657) in view of Baldwin et al. (US 20150348544), in further view of Johnson et al. (US 20030162561).

Consider claim 3: Callaghan discloses the memory further stores instructions to cause the processor to: in response to a determination that the prompt on the non-audio channel has been received from the user, select an audible output based on an activity by the user on the non-audio channel (if the user has selected field 104 via a touchscreen, Col 4 lines 30-41, prompt H: "What is the customer's problem" is output, Fig 6, by the audio element queue); select, at a first time, a first prompt from a plurality of prompts (a first WAV file is selected and output, Col 6 lines 34-42, at a first time, e.g. at "D" prompting "What is the customer name", Fig 6); and select, at a second time after the first time, a second prompt from the plurality of prompts, the selecting the audible output being based on the second prompt (a second WAV file is selected and output, Col 6 lines 34-42, at a second time, e.g. at "H" prompting "What is the customer problem?", Fig 6). Callaghan and Baldwin do not specifically mention selecting a first language and a second language.
Johnson discloses selecting a first language and a second language (VoiceXML is the base language of the multimodal application, and the CMMT indicates a text mode containing the text in HTML, [0044]; their selection is implicit). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Callaghan and Baldwin by selecting a first language and a second language in order to allow concurrent input of information by a user in differing modes through differing applications, as suggested by Johnson ([0009]), leading to the predictable result of reducing the cumbersome need to manually switch modes during a session, as suggested by Johnson ([0006]). The cited references are analogous art in the field of multi-modal input.

Consider claim 4: Callaghan and Baldwin do not, but Johnson discloses: the audio channel is associated with a first device type from a plurality of device types, the non-audio channel is associated with a second device type from the plurality of device types, and the first device type includes a phone, a smart speaker, an earphone or an Internet of Things (IoT) device, the second device type includes a phone, a smart speaker, an earphone or an IoT device, the first device type being different from the second device type (different modalities including voice and keyboard or touchscreen input on different devices, [0004], [0005], such as a cellular telephone, [0023], and a speaker located on another device such as a PDA, [0024]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Callaghan and Baldwin such that the audio channel is associated with a first device type from a plurality of device types, the non-audio channel is associated with a second device type from the plurality of device types, and the first device type includes a phone, a smart speaker, an earphone or an Internet of Things (IoT) device, the second device type includes a phone, a smart speaker, an earphone or an IoT device, the first device type being different from the second device type, for reasons similar to those for claim 3.

Consider claim 6: Callaghan discloses: during a first time period, the non-audio channel is associated with and the selecting is performed with respect to a first digital non-audio channel (determining that the user has selected field 104 via a touchscreen, Col 4 lines 30-41), and during a second time period after the first time period, the non-audio channel is associated with and the selecting is performed with respect to a second digital non-audio channel different from the first digital non-audio channel (after up to a 10 second delay, determining the user has selected "SUBMIT" via a tactile input method, such as the keyboard, Col 5 lines 12-18, Fig. 6).
Consider claim 19: Callaghan and Baldwin do not, but Johnson discloses: the audio channel is associated with a first device type from a plurality of device types, the non-audio channel is associated with a second device type from the plurality of device types, and the first device type includes a phone, a smart speaker, an earphone or an Internet of Things (IoT) device, the second device type includes a phone, a smart speaker, an earphone or an IoT device, the first device type being different from the second device type (different modalities including voice and keyboard or touchscreen input on different devices, [0004], [0005], such as a cellular telephone, [0023], and a speaker located on another device such as a PDA, [0024]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Callaghan and Baldwin such that the audio channel is associated with a first device type from a plurality of device types, the non-audio channel is associated with a second device type from the plurality of device types, and the first device type includes a phone, a smart speaker, an earphone or an Internet of Things (IoT) device, the second device type includes a phone, a smart speaker, an earphone or an IoT device, the first device type being different from the second device type, for reasons similar to those for claim 3.

Claims 5, 7, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Callaghan et al. (US 7406657) in view of Baldwin et al. (US 20150348544), in further view of Dutta et al. (US 20200177730).

Consider claim 5: Callaghan discloses the memory further stores instructions to cause the processor to: in response to a determination that the prompt on the non-audio channel has been received from the user (determining whether the user has selected field 104 via a touchscreen, Col 4 lines 30-41, or selected "SUBMIT" via a tactile input method, such as the touchscreen, Col 5 lines 12-18), select an audible output based on an activity by the user on the non-audio channel, and receive a signal from a device for the non-audio channel, the selecting the audible output being based on the signal from the device for the non-audio channel (if the user has selected field 104 via a touchscreen, Col 4 lines 30-41, prompt H: "What is the customer's problem" is output, Fig 6, by the audio element queue; the thread doing the audio progression on behalf of the browser requests that progress be stopped, re-positions, and then restarted, to output the various prompts, Col 6-7 lines 29-13, Fig. 6). Callaghan and Baldwin do not specifically mention receiving, via an application programming interface (API), a signal. Dutta discloses receiving, via an application programming interface (API), a signal (an API call, [0067]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Callaghan and Baldwin by receiving, via an application programming interface (API), a signal in order to avoid disjointed communication, as suggested by Dutta ([0008]), predictably resulting in improved quality of experience for a customer, as suggested by Dutta ([0007]). The cited references are analogous art in the field of multi-modal input.
Consider claim 7: Callaghan discloses the repeatedly determining and the sending the signal is repeated until an end of the session (when the last field of the form is reached, a pause is generated and the system determines whether the user uses a tactile input method to either submit, cancel, or reset the form, Col 5 lines 3-18, before the pause expires, and if not, loops back to the top of the form, Fig 6 steps B through K), the method further comprising: after the start of the session and before the end of the session, performing at least one of: determine that a prompt on the audio channel received from the user includes an indication that the user would like to discontinue the non-audio channel (determining the user has spoken the "CANCEL" command, Col 5 lines 17-19) or determine that the prompt on the non-audio channel includes an indication that the user would like to discontinue the non-audio channel (determining the user has selected the "CANCEL" command via a tactile input method, Col 5 lines 17-19); and terminate the non-audio channel of the session, in response to the indication that the user would like to discontinue the non-audio channel (the cycle continues until the user invokes a command to end, Col 7 lines 9-11, Step L, Fig. 6, which ends the audio/visual presentation, Col 6 lines 5-9). Callaghan and Baldwin do not specifically mention send, after the terminating, a signal to connect a communication device of the user with a communication device of a live agent. Dutta discloses sending a signal to connect a communication device of the user with a communication device of a live agent (connect to voice agent button 400, Fig. 4, [0062]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Callaghan and Baldwin by sending, after the terminating as in Callaghan, a signal to connect a communication device of the user with a communication device of a live agent as disclosed by Dutta, for reasons similar to those for claim 5.

Consider claim 18: Callaghan does not, but Baldwin discloses, the compute device is a mobile device (mobile terminal such as a mobile phone, Col 3 lines 38-47). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Callaghan such that the compute device is a mobile device in order to increase convenience, as suggested by Baldwin (Col 1 lines 27-29). Callaghan and Baldwin do not specifically mention transmitting a hyperlink to the mobile device via at least one of a text message or an email, the causing of the audio channel associated with the user to synchronize with the at least one non-audio channel associated with the user performed automatically in response to the user selecting the hyperlink. Dutta discloses transmitting a hyperlink to the mobile device via at least one of a text message or an email, the causing of the audio channel associated with the user to synchronize with the at least one non-audio channel associated with the user performed automatically in response to the user selecting the hyperlink (the apparatus provides a message including a URL to the caller via email sent to a different device, selection of which triggers a linked web session, [0022]).
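
Illustratively, the hyperlink-triggered linking relied on for claim 18 can be pictured as a token carried in the link that ties the new web (non-audio) channel back to the already-open audio session. A minimal sketch (hypothetical names, URL, and token scheme; drawn from neither the claim nor Dutta's disclosure):

# Hypothetical sketch: a link carrying a session token is sent to the user's
# mobile device; opening it attaches a web channel to the existing audio session.
import secrets

sessions = {}  # token -> session record

def start_audio_session(user_id):
    token = secrets.token_urlsafe(8)
    sessions[token] = {"user": user_id, "channels": ["audio"]}
    return token

def hyperlink_for(token):
    return f"https://example.com/join?session={token}"  # sent via SMS or email

def on_hyperlink_opened(token):
    # selecting the link attaches and synchronizes the non-audio channel
    sessions[token]["channels"].append("web")
    return sessions[token]

token = start_audio_session("user-42")
print(hyperlink_for(token))
print(on_hyperlink_opened(token))  # {'user': 'user-42', 'channels': ['audio', 'web']}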
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Callaghan and Baldwin such that the compute device is a mobile device and by transmitting a hyperlink to the mobile device via at least one of a text message or an email, the causing of the audio channel associated with the user to synchronize with the at least one non-audio channel associated with the user performed automatically in response to the user selecting the hyperlink, for reasons similar to those for claim 5.

Consider claim 20: Callaghan and Baldwin do not, but Dutta discloses, the compute device is a first compute device, the method further comprising: causing a connection to a second compute device associated with at least one of a live chat or a live agent in response to an indication from the user to connect with at least one of the live chat or the live agent (connect to voice agent button 400, Fig. 4, [0062], which causes a connection from the mobile phone of the user to the agent's computing device). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Callaghan and Baldwin such that the compute device is a first compute device, and by causing a connection to a second compute device associated with at least one of a live chat or a live agent in response to an indication from the user to connect with at least one of the live chat or the live agent, for reasons similar to those for claim 5.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jesse Pullias, whose telephone number is 571/270-5135. The examiner can normally be reached M-F 8:00 AM - 4:30 PM. The examiner's fax number is 571/270-6135. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached at 571/272-7516.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jesse S Pullias/
Primary Examiner, Art Unit 2655
02/26/26

Prosecution Timeline

May 10, 2023
Application Filed
Aug 08, 2024
Non-Final Rejection — §103
Oct 17, 2024
Response Filed
Nov 06, 2024
Final Rejection — §103
Jan 23, 2025
Response after Non-Final Action
Feb 10, 2025
Request for Continued Examination
Feb 13, 2025
Response after Non-Final Action
Mar 11, 2025
Non-Final Rejection — §103
Jun 02, 2025
Response Filed
Jun 16, 2025
Final Rejection — §103
Sep 05, 2025
Response after Non-Final Action
Sep 17, 2025
Request for Continued Examination
Oct 01, 2025
Response after Non-Final Action
Oct 06, 2025
Non-Final Rejection — §103
Feb 02, 2026
Response Filed
Feb 26, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596885
Automatically Labeling Items using a Machine-Trained Language Model
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12573378
SPEECH TENDENCY CLASSIFICATION
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12572740
MULTI-LANGUAGE DOCUMENT FIELD EXTRACTION
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12566929
COMBINING DATA SELECTION AND REWARD FUNCTIONS FOR TUNING LARGE LANGUAGE MODELS USING REINFORCEMENT LEARNING
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12536389
TRANSLATION SYSTEM
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 83%
With Interview: 96% (+13.0%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 1052 resolved cases by this examiner. Grant probability derived from career allow rate.
