Prosecution Insights
Last updated: April 19, 2026
Application No. 18/112,157

SYSTEMS AND METHODS FOR USING AUDIO WATERMARKS TO JOIN MEETINGS

Final Rejection — §102, §103
Filed: Feb 21, 2023
Examiner: PULLIAS, JESSE SCOTT
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: Cisco Technology Inc.
OA Round: 4 (Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 8m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 83% — above average (873 granted / 1052 resolved; +21.0% vs TC avg)
Interview Lift: +13.0% (moderate), based on resolved cases with an interview
Avg Prosecution: 2y 8m typical timeline; 47 applications currently pending
Total Applications: 1099 across all art units
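The headline figures above follow from the stated counts. A quick check, assuming (as the page's footnote suggests) that the grant probability is the career allow rate and that the with-interview figure is simply that rate plus the interview lift:

```python
# Reproduce the dashboard's headline figures from its stated counts.
# Assumption: "With Interview" = career allow rate + interview lift.
granted, resolved = 873, 1052
allow_rate_pct = granted / resolved * 100   # 82.98...
interview_lift_pct = 13.0

print(round(allow_rate_pct))                       # -> 83
print(round(allow_rate_pct + interview_lift_pct))  # -> 96
```

873 of 1052 resolved cases is 82.98%, which rounds to the displayed 83%; adding the +13.0% lift yields the displayed 96%.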

Statute-Specific Performance

§101: 15.0% (-25.0% vs TC avg)
§103: 50.4% (+10.4% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)
Deltas compare against the Tech Center average estimate • Based on career data from 1052 resolved cases

Office Action — §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 08/08/25 has been entered.

This Office action is in response to correspondence of 08/08/25 regarding application 18/112,157, in which claims 1-6, 8-9, 12-13, and 15-20 were amended. Claims 1-20 are pending in the application and have been considered.

Response to Arguments

The amended specification overcomes the objection to the specification, and so the objection is withdrawn. Applicant’s request on page 9 that the examiner contact Applicant’s attorney prior to issuing a new Office action is acknowledged. However, since the Office action contains new grounds for rejection based on newly cited prior art, it is believed that the interest of compact and efficient prosecution is better served by allowing Applicant’s attorney an opportunity to review the written new grounds for rejection prior to any additional interviews. The examiner agrees with Applicant on pages 10-11 that Weiner does not anticipate the claims as amended. In particular, the examiner agrees with Applicant that Weiner does not disclose detecting, by a first device of the user, the audio watermark. 
However, the examiner respectfully disagrees with Applicant that Weiner does not disclose connecting a second device of a user to a meeting session; in response to connecting the second device of the user to the meeting session, generating an audio watermark for the user of the meeting session, wherein the audio watermark comprises an identification of the user; … decoding, by the first device of the user, the audio watermark; and determining, by the first device, the identification of the user and the identification of the meeting session in response to decoding the audio watermark. Weiner discloses connecting a second device of a user to a meeting session because in Weiner, endpoint 115, a second device of a user, communicates, i.e. connects, with conference server 120 over a communication network, [0031], which is hosting a video conference with multiple endpoints, i.e. a meeting session, [0028], for which a user having mobile device 130, i.e. a first device, is sharing the endpoint, [0066]. Weiner further discloses in response to connecting the second device of the user to the meeting session, generating an audio watermark for the user of the meeting session because in Weiner, the marking component inserts media cues, e.g. an audio watermark, into a particular data stream, [0047], for the user of shared endpoint 115 and the user of device 130 during a conference, [0047-0050]; since the video is streamed to endpoint 115 in response to its connection to the conference, the audio watermark present in the stream is considered generated “in response to connecting”. Weiner further discloses wherein the audio watermark comprises an identification of the user because in Weiner, Mark Decoding Unit 325 deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047], to obtain identification information, e.g. 
a service/user ID, user identification information, which is used by device 130 to serve as identification in communications with conference server 120, [0048]. Weiner further discloses decoding, by a first device of the user, the audio watermark because in Weiner, application 140 running on device 130, a first device of the user, contains Mark Decoding Unit 325, which deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047]; and Weiner further discloses determining, by the first device, the identification of the user and an identification of the meeting session in response to decoding the audio watermark because in Weiner, Mark Decoding Unit 325 deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047], to obtain identification information, e.g. a service/user ID, user identification information, [0048], the service ID identifying the conference, i.e. meeting session, [0032]. Applicant argues on page 11 that Weiner’s device 130 obtains the identification information from the captured media clip rather than generating the audio watermark with the identification information of the user of device 130. In response, the examiner agrees with Applicant that this is how the Fig. 2 example of Weiner described at [0044] works. However, Weiner appears to disclose multiple alternative ways of identifying the user, and [0039] of Weiner specifically discloses “An application 140 running on device 130 can extract, from the media clip 135 of conference data stream 125, the service/user identification information, the user identification information, and/or service identification information 145 to be used to identify the user of device 130 and/or device 130 itself” (emphasis added). The media clip 135 may be a video into which marking component 315 inserts an audio cue ([0047]). 
The examiner considers this “extracting” of the service/user identification information to be different from merely obtaining the identification information by correlating the media clip. Rather, the media cue, i.e. watermark, itself appears to contain the identification information. See e.g. [0025]: “Information received from the device, such as identification information from a decoded media cue or identification information from correlating a media clip with a particular data stream, can be used to associate a device with a service or services on the server.” The above evidence is sufficient to establish that the audio cue itself, i.e. the watermark inserted into the media clip, contains “identification information” which is “to be used to identify the user of device 130”. In other words, Weiner does not merely decode a generic audio cue which is then used to obtain identification information, but instead extracts or decodes specific identification from the audio cue itself, which is sufficient to conclude that Weiner is considered to fairly disclose, or at the very least suggest, “… generating an audio watermark for the user of the meeting session, wherein the audio watermark comprises an identification of the user…”. The examiner notes that while the claim language requires that the audio watermark “comprises” an identification “of the user”, there is no requirement in the claims that the identification comprised in the audio watermark is specific to the user.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 8-11, and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Wiener et al. (US 20130166742) in view of Bell (US 20140117073). Consider claim 1, Wiener discloses a system, comprising: one or more processors (processor, [0015]); and one or more computer-readable non-transitory storage media coupled to the one or more processors and comprising instructions that, when executed by the one or more processors (non-transitory computer-readable medium containing instructions executed by the processor, [0015]), cause the system to perform operations comprising: connecting a second device of a user to a meeting session (endpoint 115, a second device of a user, communicates, i.e. connects, with conference server 120 over a communication network, [0031], which is hosting a video conference with multiple endpoints, i.e. a meeting session, [0028], for which a user having mobile device 130, i.e. a first device, is sharing the endpoint, [0066]); in response to connecting the second device of the user to the meeting session, generating an audio watermark for the user of the meeting session (marking component inserts media cues, e.g. 
an audio watermark, into a particular data stream, [0047], for user of shared endpoint 115 and user of device 130 during a conference, [0047-0050]; since the video is streamed to endpoint 115 in response to its connection to the conference, the audio watermark present in the stream is considered generated “in response to connecting”), wherein the audio watermark comprises an identification of the user (Mark Decoding Unit 325 deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047], to obtain identification information, e.g. a service/user ID, user identification information, which is used by device 130 to serve as identification in communications with conference server 120, [0048]); decoding, by a first device of the user, the audio watermark (application 140 running on device 130, a first device of the user, contains Mark Decoding Unit 325, which deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047],); and determining, by the first device, the identification of the user and an identification of the meeting session in response to decoding the audio watermark (Mark Decoding Unit 325 deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047], to obtain identification information, e.g. a service/user ID, user identification information, [0048], the service ID identifying the conference, i.e. meeting session, [0032]). Wiener does not specifically mention detecting, by a first device of the user, the audio watermark. Bell discloses detecting, by a first device of the user, the audio watermark (audio output from speaker has an audio watermarking encoded with the audio stream 520 that can be detected by an audio capture device of a client computing device, [0054], such as an attendee’s mobile phone, [0030]). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wiener by detecting, by a first device of the user, the audio watermark in order to allow a person to join an online meeting that is in progress without having to have a meeting invitation issued to them, as suggested by Bell ([0002]), predictably also avoiding a person having to spend time searching for a meeting invitation previously provided to them, as suggested by Bell ([0002]). The references cited are analogous art in the same field of meeting assistance. Consider claim 8, Wiener discloses a method, comprising: connecting a second device of a user to a meeting session (endpoint 115, a second device of a user, communicates, i.e. connects, with conference server 120 over a communication network, [0031], which is hosting a video conference with multiple endpoints, i.e. a meeting session, [0028], for which a user having mobile device 130, i.e. a first device, is sharing the endpoint, [0066]); in response to connecting the second device of the user to the meeting session, generating an audio watermark for the user of the meeting session (marking component inserts media cues, e.g. an audio watermark, into a particular data stream, [0047], for user of shared endpoint 115 and user of device 130 during a conference, [0047-0050]; since the video is streamed to endpoint 115 in response to its connection to the conference, the audio watermark present in the stream is considered generated “in response to connecting”), wherein the audio watermark comprises an identification of the user (Mark Decoding Unit 325 deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047], to obtain identification information, e.g. 
a service/user ID, user identification information, which is used by device 130 to serve as identification in communications with conference server 120, [0048]); decoding, by a first device of the user, the audio watermark (application 140 running on device 130, a first device of the user, contains Mark Decoding Unit 325, which deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047],); and determining, by the first device, the identification of the user and an identification of the meeting session in response to decoding the audio watermark (Mark Decoding Unit 325 deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047], to obtain identification information, e.g. a service/user ID, user identification information, [0048], the service ID identifying the conference, i.e. meeting session, [0032]). Wiener does not specifically mention detecting, by a first device of the user, the audio watermark. Bell discloses detecting, by a first device of the user, the audio watermark (audio output from speaker has an audio watermarking encoded with the audio stream 520 that can be detected by an audio capture device of a client computing device, [0054], such as an attendee’s mobile phone, [0030]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wiener by detecting, by a first device of the user, the audio watermark for reasons similar to those for claim 1. 
Consider claim 15, Wiener discloses one or more computer-readable non-transitory storage media embodying instructions that, when executed by a processor (non-transitory computer-readable medium containing instructions executed by the processor, [0015]), cause the processor to perform operations comprising: connecting a second device of a user to a meeting session (endpoint 115, a second device of a user, communicates, i.e. connects, with conference server 120 over a communication network, [0031], which is hosting a video conference with multiple endpoints, i.e. a meeting session, [0028], for which a user having mobile device 130, i.e. a first device, is sharing the endpoint, [0066]); in response to connecting the second device of the user to the meeting session, generating an audio watermark for the user of the meeting session (marking component inserts media cues, e.g. an audio watermark, into a particular data stream, [0047], for user of shared endpoint 115 and user of device 130 during a conference, [0047-0050]; since the video is streamed to endpoint 115 in response to its connection to the conference, the audio watermark present in the stream is considered generated “in response to connecting”), wherein the audio watermark comprises an identification of the user (Mark Decoding Unit 325 deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047], to obtain identification information, e.g. 
a service/user ID, user identification information, which is used by device 130 to serve as identification in communications with conference server 120, [0048]); decoding, by a first device of the user, the audio watermark (application 140 running on device 130, a first device of the user, contains Mark Decoding Unit 325, which deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047],); and determining, by the first device, the identification of the user and an identification of the meeting session in response to decoding the audio watermark (Mark Decoding Unit 325 deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047], to obtain identification information, e.g. a service/user ID, user identification information, [0048], the service ID identifying the conference, i.e. meeting session, [0032]). Wiener does not specifically mention detecting, by a first device of the user, the audio watermark. Bell discloses detecting, by a first device of the user, the audio watermark (audio output from speaker has an audio watermarking encoded with the audio stream 520 that can be detected by an audio capture device of a client computing device, [0054], such as an attendee’s mobile phone, [0030]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wiener by detecting, by a first device of the user, the audio watermark for reasons similar to those for claim 1. Consider claim 2, Wiener discloses the audio watermark comprises: the identification of the meeting session (identification from the decoded media cue, [0025], associates the user with particular services on a particular server, such as a conference, i.e. meeting session, [0050], [0064]). 
Consider claim 3, Wiener discloses detecting the audio watermark comprises listening, via a microphone of the first device, for the audio watermark produced by a speaker of the second device (device 130 records an audio clip 135 from audio being output from speakers of endpoint 115, i.e. listens, [0043], which is transmitted to conference server 120 for detecting the media cue, i.e. watermark, [0040]). Consider claim 4, Wiener discloses moving, in response to identifying the user and the meeting session, the meeting session from the first device to the second device (identification information, which includes user identity and session information, is used to transfer a user currently using a device such as a cellular telephone, i.e. first device, to a shared endpoint, i.e. second device, without disconnecting, [0066], [0025], [0050], [0064], via server 120, also a “first device”). Consider claim 9, Wiener discloses the audio watermark comprises: the identification of the meeting session (identification from the decoded media cue, [0025], associates the user with particular services on a particular server, such as a conference, i.e. meeting session, [0050], [0064]). Consider claim 10, Wiener discloses detecting the audio watermark comprises listening, via a microphone of the first device, for the audio watermark produced by a speaker of the second device (device 130 records an audio clip 135 from audio being output from speakers of endpoint 115, i.e. listens, [0043], which is transmitted to conference server 120 for detecting the media cue, i.e. watermark, [0040]). Consider claim 11, Wiener discloses the first device is a local device and the second device is a mobile device (identification information, which includes user identity and session information, is used to transfer a user sharing an endpoint, i.e. a local first device, to a cellular telephone, i.e. mobile device, without disconnecting, [0066], [0025], [0050], [0064]; server 120 is connected to endpoints and the mobile device via LAN and is therefore also “a local first device”). Consider claim 16, Wiener discloses the audio watermark comprises: the identification of the meeting session (identification from the decoded media cue, [0025], associates the user with particular services on a particular server, such as a conference, i.e. meeting session, [0050], [0064]). Consider claim 17, Wiener discloses detecting the audio watermark comprises listening, via a microphone of the first device, for the audio watermark produced by a speaker of the second device (device 130 records an audio clip 135 from audio being output from speakers of endpoint 115, i.e. listens, [0043], which is transmitted to conference server 120 for detecting the media cue, i.e. watermark, [0040]). Consider claim 18, Wiener discloses moving, in response to identifying the user and the meeting session, the meeting session from a first device to a second device (identification information, which includes user identity and session information, is used to transfer a user currently using a device such as a cellular telephone, i.e. first device, to a shared endpoint, i.e. second device, without disconnecting, [0066], [0025], [0050], [0064], via server 120, also a “first device”).

Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wiener et al. (US 20130166742) in view of Bell (US 20140117073), in further view of Raanani et al. (US 20190057698). Consider claim 5, Wiener discloses determining the identification of the user and the meeting session (Mark Decoding Unit 325 deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047], to obtain identification information, e.g. a service/user ID, user identification information, [0048], the service ID identifying the conference, i.e. meeting session, [0032]). 
Wiener and Bell do not specifically mention initiating activation of a virtual assistant in response to identifying the user and the meeting session; and instructing the virtual assistant to perform an action associated with interacting with the user. Raanani discloses initiating activation of a virtual assistant in response to identifying a trigger during a meeting session (during a real-time call, in-call virtual assistant monitors, identifies a trigger, [0029]); and instructing the virtual assistant to perform an action associated with interacting with the user (a specified task in response to the trigger, [0029]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wiener by initiating activation of a virtual assistant, as in Raanani, in response to identifying the user and the meeting session as in Wiener; and instructing the virtual assistant to perform an action associated with interacting with the user as in Raanani in order to help users guide conversations with other users, as suggested by Raanani ([0016]). Doing so would have led to predictable results of increasing the probability of a positive outcome for the call, as suggested by Raanani ([0016]). The references cited are analogous art in the same field of meeting assistance. Consider claim 12, Wiener discloses determining the identification of the user and the meeting session (Mark Decoding Unit 325 deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047], to obtain identification information, e.g. a service/user ID, user identification information, [0048], the service ID identifying the conference, i.e. 
meeting session, [0032]). Wiener and Bell do not specifically mention initiating activation of a virtual assistant in response to identifying the user and the meeting session; and instructing the virtual assistant to perform an action associated with interacting with the user. Raanani discloses initiating activation of a virtual assistant in response to identifying a trigger during a meeting session (during a real-time call, in-call virtual assistant monitors, identifies a trigger, [0029]); and instructing the virtual assistant to perform an action associated with interacting with the user (a specified task in response to the trigger, [0029]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wiener and Bell by initiating activation of a virtual assistant, as in Raanani, in response to identifying the user and the meeting session as in Wiener; and instructing the virtual assistant to perform an action associated with interacting with the user as in Raanani for reasons similar to those for claim 5. Consider claim 19, Wiener discloses determining the identification of the user and the meeting session (Mark Decoding Unit 325 deciphers or decodes the received video clip, into which the audio watermark has been inserted by Marking Unit 315, [0047], to obtain identification information, e.g. a service/user ID, user identification information, [0048], the service ID identifying the conference, i.e. meeting session, [0032]). Wiener and Bell do not specifically mention initiating activation of a virtual assistant in response to identifying the user and the meeting session; and instructing the virtual assistant to perform an action associated with interacting with the user. 
Raanani discloses initiating activation of a virtual assistant in response to identifying a trigger during a meeting session (during a real-time call, in-call virtual assistant monitors, identifies a trigger, [0029]); and instructing the virtual assistant to perform an action associated with interacting with the user (a specified task in response to the trigger, [0029]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wiener and Bell by initiating activation of a virtual assistant, as in Raanani, in response to identifying the user and the meeting session as in Wiener; and instructing the virtual assistant to perform an action associated with interacting with the user as in Raanani for reasons similar to those for claim 5.

Claims 6, 7, 13, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wiener et al. (US 20130166742) in view of Bell (US 20140117073), in further view of Holmes et al. (US 20170357915). Consider claim 6, Wiener and Bell do not, but Holmes discloses: communicating a notification to the user (Fig 19H element 1972 “Your meeting is ending soon”, [0556]); receiving an input from the user (user taps GUI element 1972A or 1972B, Fig 19H, [0556]); and determining whether to move the meeting session from a first device to a second device based on the input from the user (if the user taps 1972A, transferring the call to the user’s mobile phone, [0557]). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wiener and Bell by communicating a notification to a user; receiving an input from the user; and determining whether to move the meeting session from a first device to a second device based on the input from the user in order to reduce the tedious and cognitively burdensome actions required by the user to manipulate a presentation, as suggested by Holmes ([0004], [0005]), predictably reducing energy waste, as suggested by Holmes ([0004], [0005]). The references cited are analogous art in the same field of meeting assistance. Consider claim 7, Wiener, Bell, and Holmes do not specifically mention interpreting the input from the user using one of the following: speech recognition; or facial recognition. However, Holmes elsewhere discloses interpreting an(other) input from the user using one of the following: speech recognition; or facial recognition (voice recognition, [0110], used to recognize a voice command, [0653]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wiener, Bell, and Holmes further by accepting a voice command for GUI elements 1972A or 1972B, Fig 19H, [0556], thereby interpreting the input from the user using speech recognition for reasons similar to those for claim 6. Consider claim 13, Wiener and Bell do not, but Holmes discloses: communicating a notification to the user (Fig 19H element 1972 “Your meeting is ending soon”, [0556]); receiving an input from the user (user taps GUI element 1972A or 1972B, Fig 19H, [0556]); and determining whether to move the meeting session from the first device to the second device based on the input from the user (if the user taps 1972A, transferring the call to the user’s mobile phone, [0557]). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wiener and Bell by communicating a notification to a user; receiving an input from the user; and determining whether to move the meeting session from a first device to a second device based on the input from the user for reasons similar to those for claim 6. Consider claim 14, Wiener, Bell, and Holmes do not specifically mention interpreting the input from the user using one of the following: speech recognition; or facial recognition. However, Holmes elsewhere discloses interpreting an input from the user using one of the following: speech recognition; or facial recognition (voice recognition, [0110], used to recognize a voice command, [0653]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wiener, Bell, and Holmes by accepting a voice command for GUI elements 1972A or 1972B, Fig 19H, [0556], thereby interpreting the input from the user using speech recognition for reasons similar to those for claim 6. Consider claim 20, Wiener and Bell do not, but Holmes discloses: communicating a notification to the user (Fig 19H element 1972 “Your meeting is ending soon”, [0556]); receiving an input from the user (user taps GUI element 1972A or 1972B, Fig 19H, [0556]); and determining whether to move the meeting session from a first device to a second device based on the input from the user (if the user taps 1972A, transferring the call to the user’s mobile phone, [0557]). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wiener and Bell by communicating a notification to a user; receiving an input from the user; and determining whether to move the meeting session from a first device to a second device based on the input from the user for reasons similar to those for claim 6.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20200098379 (Tai) discloses audio watermark encoding/decoding.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached at 571-272-7516.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jesse S Pullias/
Primary Examiner, Art Unit 2655
09/04/25
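The mechanism the rejection turns on — an audio watermark that carries user and meeting identifiers, which a nearby first device detects and decodes — can be sketched with a toy example. Everything below (two-tone FSK, the frequencies, the 16-bit user/meeting field layout, and all function names) is an assumption chosen for illustration; it is not the method of Wiener, Bell, or the pending claims.

```python
import math

RATE = 8000              # samples per second
BIT_SAMPLES = 200        # samples per bit -> integer tone cycles per bit
F0, F1 = 1000.0, 2000.0  # hypothetical tones for bit 0 / bit 1

def pack_payload(user_id: int, meeting_id: int) -> list:
    """Pack two 16-bit identifiers into a 32-bit big-endian bit list."""
    word = ((user_id & 0xFFFF) << 16) | (meeting_id & 0xFFFF)
    return [(word >> (31 - i)) & 1 for i in range(32)]

def modulate(bits: list) -> list:
    """Render each payload bit as a short burst of the matching tone."""
    samples = []
    for b in bits:
        f = F1 if b else F0
        samples += [math.sin(2 * math.pi * f * n / RATE)
                    for n in range(BIT_SAMPLES)]
    return samples

def goertzel(block: list, freq: float) -> float:
    """Signal power at a single frequency (Goertzel algorithm)."""
    coeff = 2 * math.cos(2 * math.pi * freq / RATE)
    s1 = s2 = 0.0
    for x in block:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def demodulate(samples: list) -> tuple:
    """Recover (user_id, meeting_id) by comparing tone energies per bit."""
    word = 0
    for i in range(0, len(samples), BIT_SAMPLES):
        block = samples[i:i + BIT_SAMPLES]
        bit = 1 if goertzel(block, F1) > goertzel(block, F0) else 0
        word = (word << 1) | bit
    return word >> 16, word & 0xFFFF

audio = modulate(pack_payload(0x1234, 0xBEEF))
print(demodulate(audio))  # -> (4660, 48879), i.e. (0x1234, 0xBEEF)
```

The bit duration is chosen so each tone completes an integer number of cycles per bit (25 and 50 at 8 kHz), which makes the two Goertzel bins orthogonal and the decision robust; a real deployment would add synchronization, error correction, and psychoacoustic masking so the mark is inaudible.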

Prosecution Timeline

Feb 21, 2023 — Application Filed
Jan 13, 2025 — Non-Final Rejection (§102, §103)
Apr 16, 2025 — Response Filed
Apr 16, 2025 — Applicant Interview (Telephonic)
Apr 16, 2025 — Examiner Interview Summary
May 07, 2025 — Final Rejection (§102, §103)
Aug 08, 2025 — Examiner Interview Summary
Aug 08, 2025 — Applicant Interview (Telephonic)
Aug 08, 2025 — Request for Continued Examination
Aug 12, 2025 — Response after Non-Final Action
Sep 04, 2025 — Non-Final Rejection (§102, §103)
Dec 08, 2025 — Response Filed
Dec 08, 2025 — Applicant Interview (Telephonic)
Dec 08, 2025 — Examiner Interview Summary
Dec 19, 2025 — Final Rejection (§102, §103)
Mar 17, 2026 — Examiner Interview Summary
Mar 17, 2026 — Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596885 — Automatically Labeling Items using a Machine-Trained Language Model (2y 5m to grant; granted Apr 07, 2026)
Patent 12573378 — SPEECH TENDENCY CLASSIFICATION (2y 5m to grant; granted Mar 10, 2026)
Patent 12572740 — MULTI-LANGUAGE DOCUMENT FIELD EXTRACTION (2y 5m to grant; granted Mar 10, 2026)
Patent 12566929 — COMBINING DATA SELECTION AND REWARD FUNCTIONS FOR TUNING LARGE LANGUAGE MODELS USING REINFORCEMENT LEARNING (2y 5m to grant; granted Mar 03, 2026)
Patent 12536389 — TRANSLATION SYSTEM (2y 5m to grant; granted Jan 27, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 83%
With Interview: 96% (+13.0%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 1052 resolved cases by this examiner. Grant probability derived from career allow rate.
