Prosecution Insights
Last updated: April 19, 2026
Application No. 18/646,336

COLLECTING EMG SPEECH SIGNAL DATA

Non-Final OA: §101, §102, §103

Filed: Apr 25, 2024
Examiner: HUTCHESON, CODY DOUGLAS
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Snap Inc.
OA Round: 1 (Non-Final)

Grant Probability: 62% (Moderate); 99% with interview
OA Rounds: 1-2
To Grant: 2y 10m
Examiner Intelligence

Career Allow Rate: 62% of resolved cases (15 granted / 24 resolved; +0.5% vs TC avg)
Interview Lift: +47.1% for resolved cases with an interview (strong)
Typical Timeline: 2y 10m avg prosecution; 34 applications currently pending
Career History: 58 total applications across all art units
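As a sanity check, the headline figures above can be reproduced from the raw counts the panel reports. A minimal sketch in plain Python; note that the with/without-interview split used to illustrate the lift is a hypothetical placeholder, since the panel shows only the aggregate counts and the +47.1% figure:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# Counts from the panel: 15 granted out of 24 resolved cases.
career = allow_rate(15, 24)
print(f"{career:.0f}%")  # prints "62%" (62.5 rendered at zero decimals)

# Interview lift, read here as the percentage-point gap between allowance
# rates with and without an examiner interview. The two sub-rates below are
# hypothetical, chosen only to illustrate how a +47.1% lift would arise.
lift = 90.0 - 42.9
print(f"+{lift:.1f}%")  # prints "+47.1%"
```

The exact definition of "interview lift" (percentage-point gap vs. relative change) is not stated on the panel; the sketch assumes the former.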

Statute-Specific Performance

§101: 33.9% (-6.1% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§103: 40.9% (+0.9% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)

Based on career data from 24 resolved cases; Tech Center averages are estimates.
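The four statute deltas are internally consistent: subtracting each stated gap from its statute's rate recovers the same implied Tech Center baseline of 40.0% in every case, which suggests the panel measures all four deltas against a single average line. A quick check, with the figures copied from the panel:

```python
# (rate, delta vs. Tech Center average) for each statute, from the panel above.
stats = {
    "101": (33.9, -6.1),
    "102": (14.8, -25.2),
    "103": (40.9, +0.9),
    "112": (7.5, -32.5),
}

# rate - delta recovers the baseline each delta was measured against.
implied_baseline = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_baseline)  # every statute implies the same 40.0% baseline
```

Whether 40.0% is the Tech Center's overall rate or a per-statute coincidence cannot be determined from the panel alone; the arithmetic only shows the displayed numbers share one baseline.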

Office Action

Rejections under §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 06/26/2024, 10/10/2024, 12/05/2024, and 09/18/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

1. Claim 13 is objected to because of the following informalities: In claim 13, “receiving input to initiate an additional recording of EMG data” is recited. The Examiner recommends instead reciting “receiving an additional input” to match the term used later in the same claim (“in response to receiving the additional input…”) and to differentiate this additional input from “the input” recited in independent claim 1. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

2. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Regarding claims 1, 19, and 20, “A method”, “A system”, and “A non-transitory computer-readable storage medium” are recited, which are each directed to one of the four statutory categories of invention (process, machine, and article of manufacture, respectively) (Step 1: YES). However, the claim limitations, under their broadest reasonable interpretation, recite mental processes, which fall into the category of abstract ideas (Step 2A Prong 1: YES).
The following limitations, under their broadest reasonable interpretation, recite mental processes:

- presenting a target word for electromyograph (EMG) data collection…: a person presents a word to a user on a piece of paper;
- receiving input to initiate recording of EMG data: a person obtains an input (e.g., a user tells the person they are ready);
- determining whether the set of EMG signals collected over the threshold period of time corresponds to the target word: a person observes EMG signals being collected and makes a determination as to whether the signal represents the target word;
- presenting feedback…based on whether the set of EMG signals collected over the threshold period of time corresponds to the target word: a person presents feedback to the user based on the determination (e.g., a written note to the user with “yes” or “no” written down to indicate whether it did or did not correspond to the target word).

Claims 1, 19, and 20 do not contain any additional elements that integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The additional limitations of “a graphical user interface (GUI)” (claims 1, 19, and 20), “an EMG communication device” (claims 1, 19, and 20), “A system comprising: at least one processor; and at least one memory component having instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to perform operations comprising” (claim 19), and “A non-transitory computer-readable storage medium having stored thereon instructions that, when executed by at least one processor, cause the at least one processor to perform operations comprising” (claim 20) are recited at a high level of generality and amount to mere instructions to implement the judicial exception using a generic computer.
Further, the limitation of “in response to receiving the input, collecting…a set of EMG signals generated based on an individual user of the EMG communication device over a threshold period of time” is insignificant extra-solution activity, as the act of receiving EMG signals falls under mere data gathering, which does not integrate the judicial exception into a practical application. Even when viewed in combination, the additional elements do not integrate the judicial exception into a practical application, as they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea.

Claims 1, 19, and 20 do not amount to significantly more than the judicial exception (Step 2B: NO). As discussed above, the additional limitations amount to mere instructions to implement the judicial exception using a generic computer or insignificant extra-solution activity. Even when viewed in combination, the additional elements do not amount to significantly more than the judicial exception, as they do not provide an inventive concept. Furthermore, the EMG-signal-receiving step amounts to receiving data in a generic manner, which has been determined to be well-understood, routine, and conventional in the art (MPEP 2106.05(d)(II)). Therefore, claims 1, 19, and 20 are not patent eligible.

Regarding claims 2-18, “The method” is recited, which is directed to one of the four statutory categories of invention (process) (Step 1: YES). However, the claim limitations, under their broadest reasonable interpretation, recite further mental processes, which fall into the category of abstract ideas (Step 2A Prong 1: YES). The following limitations, under their broadest reasonable interpretation, recite mental processes:

Claim 2: Claim 2 recites “wherein the threshold period of time comprises less than two seconds”, which further details the insignificant extra-solution activity (mere data gathering) introduced in claim 1.
Claim 3: wherein the target word is selected by a user from a list of target words presented in the GUI: a person writes down a list of words on a piece of paper and lets the user select a word.

Claim 4: wherein the target word is selected randomly…from a list of target words: a person randomly picks a word from a written list. Claim 4 contains the additional limitation “by the EMG communication device”, which amounts to mere instructions to implement the judicial exception using a generic computer.

Claim 5: wherein the feedback is presented during the threshold period of time or after the threshold period of time elapses: a person shows the user the feedback on a piece of paper after the collecting is finished and the person has made their determination.

Claim 6: wherein the feedback comprises a graphical element having a visual attribute representing whether the set of EMG signals collected over the threshold period of time corresponds to the target word: a person writes down feedback as a visual attribute (e.g., draws a box and writes the result of the determination inside the box).

Claim 7: wherein the graphical element is presented with a first visual attribute in response to determining that the set of EMG signals collected over the threshold period of time corresponds to the target word: a person writes down a first visual attribute if the target word corresponds to the signal (e.g., draws a first box and writes the result inside the box).

Claim 8: wherein the graphical element is presented with a second visual attribute in response to determining that the set of EMG signals collected over a threshold period of time fails to correspond to the target word: a person writes down a second visual attribute if the target word does not correspond (e.g., draws a second box and writes the result inside the box).

Claim 9: wherein the visual attribute comprises at least one of a predetermined animation or color: a person writes down the feedback in a particular color of pen.

Claim 10: wherein the feedback comprises a score representing a number of times that different sets of EMG signals are determined to correspond to respective target words or fail to correspond to the respective target words: a person writes down a score to show how many successes and failures occurred (e.g., writes down a percentage of successful trials).

Claim 11:
- receiving additional input to initiate additional recording of EMG data: a person gets a second input to initiate (e.g., the user tells the person they are ready);
- determining whether the additional set of EMG signals collected over the threshold period of time corresponds to an additional target word: a person observes EMG signals being collected and makes a determination as to whether the signal represents the target word;
- presenting additional feedback…based on whether the additional set of EMG signals collected over the additional threshold period of time corresponds to the additional target word: a person presents further feedback to the user based on the determination (e.g., a written note to the user with “yes” or “no” written down to indicate whether it did or did not correspond to the target word).

Claim 11 recites “the EMG communication device” and “the GUI”, which amount to mere instructions to implement the judicial exception using a generic computer. Claim 11 also recites “in response to receiving the additional input, collecting, by the EMG communication device, an additional set of EMG signals generated based on the individual user of the EMG communication device over an additional threshold period of time”, which amounts to further insignificant extra-solution activity in the form of mere data gathering.
Claim 12: Claim 12 recites “wherein the collection of the set of EMG signals is initiated within 50 milliseconds of receiving the input”, which further details the insignificant extra-solution activity (mere data gathering) introduced in claim 1.

Claim 13:
- receiving input to initiate additional recording of EMG data: a person gets a second input to initiate (e.g., the user tells the person they are ready);
- determining whether the additional set of EMG signals collected over the threshold period of time corresponds to an additional target word: a person observes EMG signals being collected and makes a determination as to whether the signal represents the target word;
- presenting additional feedback…based on whether the additional set of EMG signals collected over the additional threshold period of time corresponds to the target word: a person presents further feedback to the user based on the determination (e.g., a written note to the user with “yes” or “no” written down to indicate whether it did or did not correspond to the target word).

Claim 13 recites “the EMG communication device” and “the GUI”, which amount to mere instructions to implement the judicial exception using a generic computer. Claim 13 also recites “in response to receiving the additional input, collecting, by the EMG communication device, an additional set of EMG signals generated based on the individual user of the EMG communication device over an additional threshold period of time”, which amounts to further insignificant extra-solution activity in the form of mere data gathering.

Claim 14: wherein the feedback represents historical recording sessions indicating how many times different sets of EMG signals were determined to correspond to the target word: a person writes down information about historical sessions (e.g., writes down on paper how many times a particular target word had a corresponding EMG signal collected).

Claim 15: presenting a value indicating how many EMG data recording sessions were performed: a person writes down a running count of how many sessions have been performed.

Claim 16: determining that the set of EMG signals correspond to the target word for a first time: a person notes down that a set of EMG signals corresponds to the target word for the first time. Claim 16 further recites “presenting an animation”, which amounts to mere instructions to implement the judicial exception using a generic computer.

Claim 17: determining that the set of EMG signals correspond to the target word for a threshold number of consecutive times: a person notes down that a set of EMG signals corresponds to the target word a consecutive number of times (e.g., if it corresponds three times in a row, makes a note on the paper). Claim 17 further recites “presenting an animation”, which amounts to mere instructions to implement the judicial exception using a generic computer.

Claim 18: updating a collection of EMG training data using the set of EMG signals; …detect EMG signals produced by the individual user based on the updated collection of EMG training data; and …determine whether an additional set of EMG signals collected over an additional threshold period of time corresponds to the target word or corresponds to other target words: a person uses updated EMG signal data to further learn how to make the determinations, and then observes and makes a further determination for an additional set of signals as to whether the signals correspond to the other target words. Claim 18 recites “retraining a machine learning model…” and “using the retrained machine learning model to determine…”, which amounts to mere instructions to implement the judicial exception using a generic computer.
Claims 2-18 do not contain any additional elements that integrate the judicial exception into a practical application (Step 2A Prong 2: NO). As discussed above, the only additional limitations amount to mere instructions to implement the judicial exception using a generic computer and insignificant extra-solution activity. Even when viewed in combination, the additional elements do not integrate the judicial exception into a practical application, as they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea.

Claims 2-18 do not amount to significantly more than the judicial exception (Step 2B: NO). As discussed above, the additional limitations amount to mere instructions to implement the judicial exception using a generic computer or insignificant extra-solution activity. Even when viewed in combination, the additional elements do not amount to significantly more than the judicial exception, as they do not provide an inventive concept. Furthermore, the EMG-signal-receiving steps amount to receiving data in a generic manner, which has been determined to be well-understood, routine, and conventional in the art (MPEP 2106.05(d)(II)). Therefore, claims 2-18 are not patent eligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

3. Claims 1, 4-8, 10-11, 13, 15, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cha et al. (NPL “Deep-Learning-based real-time silent speech recognition using facial electromyogram recorded around eyes for hands-free interfacing in a virtual reality environment”, hereinafter Cha).

Regarding claim 1, Cha discloses A method comprising:
- presenting a target word for electromyograph (EMG) data collection on a graphical user interface (GUI) (Fig. 6, “Instruction” word presented on the GUI for the user to silently speak; Table 2 shows the list of target words used);
- receiving input to initiate recording of EMG data (Fig. 6, “Trying to read a word silently right when the timer has been over”);
- in response to receiving the input, collecting, by an EMG communication device, a set of EMG signals generated based on an individual user of the EMG communication device over a threshold period of time (Fig. 6, see plot panel “EMG pattern silently spoken word” collected from ~t=22 to t=26; collected via EMG device (see Fig. 6, photo of participant); pg. 3, 1st para. “In this study, as aforementioned, we develop an fEMG-based SSR system with electrodes only around the eyes where VR headset is contacted to…”; pg. 4, section 4, 1st para. “All the signal analyses were conducted using MATLAB 2019a (MathWorks, Inc., Natick, MA, the USA) on a desktop PC (Windows 10, 64 GB-RAM, Intel Core i7 8700 CPU 3.20 GHZ).”);
- determining whether the set of EMG signals collected over the threshold period of time corresponds to the target word (Fig. 4 “Three neural network models based on bLSTM cells were used to classify the fEMG patterns for the silently spoken words…”; pg. 7, section 5 “In this study, the SSR performance was evaluated in terms of a six-class classification accuracy defined as the number of correct trails divided by the number of total trials…”; the classification accuracy calculation determines whether the predicted word matches the target word); and
- presenting feedback in the GUI based on whether the set of EMG signals collected over the threshold period of time corresponds to the target word (Fig. 6, “Recognition Results” shown: “Answer: Previous, Predict: Previous”, as well as classification accuracy “1”).

Regarding claim 4, Cha discloses wherein the target word is selected randomly by the EMG communication device from a list of target words (pg. 9, 1st para. “Figure 6 shows a snapshot of an online experiment captured when the first silently spoken word was being classified…A demonstration video of the online SSR can be found…where a user was silently speaking randomly provided words 60 times…”).

Regarding claim 5, Cha discloses wherein the feedback is presented during the threshold period of time or after the threshold period of time elapses (Fig. 6, feedback “Recognition Results” shown after the EMG signal has been collected; pg. 9, 1st para. “Moreover, the classification results as well as the cumulative accuracy are presented in the panels titled “Recognition Results” and “Classification Accuracy”, respectively, both are which are presented in the bottom-left corner of the GUI program.”).

Regarding claim 6, Cha discloses wherein the feedback comprises a graphical element having a visual attribute representing whether the set of EMG signals collected over the threshold period of time corresponds to the target word (Fig. 6, the “Recognition Results” box displays an answer and a predicted result, which together indicate whether or not the collected EMG signal corresponds to the target word).

Regarding claim 7, Cha discloses wherein the graphical element is presented with a first visual attribute in response to determining that the set of EMG signals collected over the threshold period of time corresponds to the target word (first visual attribute: Fig. 6, “Recognition Results”, where “Answer” and “Predict” are the same word).

Regarding claim 8, Cha discloses wherein the graphical element is presented with a second visual attribute in response to determining that the set of EMG signals collected over the threshold period of time fails to correspond to the target word (second visual attribute: Fig. 6, “Recognition Results”, where “Answer” and “Predict” are not the same word).

Regarding claim 10, Cha discloses wherein the feedback comprises a score representing a number of times that different sets of EMG signals are determined to correspond to respective target words or fail to correspond to the respective target words (Fig. 6, “Classification Accuracy”; pg. 9, 1st para. “Moreover,…the cumulative accuracy are presented in the panels titled…”Classification Accuracy”, respectively…”; a cumulative accuracy is a score that reflects how many classifications were correct vs. incorrect).

Regarding claim 11, Cha discloses:
- receiving additional input to initiate additional recording of EMG data (Cha discloses repetition of silent speech classification for 60 trials (pg. 9, 1st para. “A demonstration video of the online SSR can be found at…where a user was silent speaking randomly provided words 60 times. The online classification accuracy was 96.67% for a total of 60 trails, in the video clip.”), and thus discloses receiving additional input to initiate additional recording (Fig. 6, using an additional timer for an additional word); see also the claim mapping for claim 1);
- in response to receiving the additional input, collecting, by the EMG communication device, an additional set of EMG signals generated based on the individual user of the EMG communication device over an additional threshold period of time (Cha discloses repetition of silent speech classification for 60 trials (pg. 9, 1st para. “Figure 6 shows a snapshot of an online experiment captured when the first silently spoken word was being classified…A demonstration video of the online SSR can be found at…where a user was silent speaking randomly provided words 60 times. The online classification accuracy was 96.67% for a total of 60 trails, in the video clip.”), and thus discloses collecting an additional set of EMG signals over an additional threshold period of time; see also the claim mapping for claim 1);
- determining whether the additional set of EMG signals collected over the threshold period of time corresponds to an additional target word (Cha discloses repetition of silent speech classification for 60 trials (pg. 9, 1st para. “A demonstration video of the online SSR can be found at…where a user was silent speaking randomly provided words 60 times. The online classification accuracy was 96.67% for a total of 60 trails, in the video clip.”), and thus discloses determining whether the additional set corresponds to the additional target word; see also the claim mapping for claim 1); and
- presenting additional feedback in the GUI based on whether the additional set of EMG signals collected over the additional threshold period of time corresponds to the additional target word (Cha discloses repetition of silent speech classification for 60 trials (pg. 9, 1st para. “A demonstration video of the online SSR can be found at…where a user was silent speaking randomly provided words 60 times.
The online classification accuracy was 96.67% for a total of 60 trails, in the video clip.”), and thus discloses presenting the additional feedback in the GUI; see also the claim mapping for claim 1).

Regarding claim 13, Cha discloses:
- receiving input to initiate additional recording of EMG data (Cha discloses repetition of silent speech classification for 60 trials (pg. 9, 1st para. “A demonstration video of the online SSR can be found at…where a user was silent speaking randomly provided words 60 times. The online classification accuracy was 96.67% for a total of 60 trails, in the video clip.”), and thus discloses receiving additional input to initiate additional recording (Fig. 6, using an additional timer for an additional word); see also the claim mapping for claim 1);
- in response to receiving the additional input, collecting, by the EMG communication device, an additional set of EMG signals generated based on the individual user of the EMG communication device over an additional threshold period of time (Cha discloses repetition of silent speech classification for 60 trials (pg. 9, 1st para. “Figure 6 shows a snapshot of an online experiment captured when the first silently spoken word was being classified…A demonstration video of the online SSR can be found at…where a user was silent speaking randomly provided words 60 times. The online classification accuracy was 96.67% for a total of 60 trails, in the video clip.”), and thus discloses collecting an additional set of EMG signals over an additional threshold period of time; see also the claim mapping for claim 1);
- determining whether the additional set of EMG signals collected over the threshold period of time corresponds to the target word (Cha discloses repetition of silent speech classification for 60 trials (pg. 9, 1st para. “A demonstration video of the online SSR can be found at…where a user was silent speaking randomly provided words 60 times. The online classification accuracy was 96.67% for a total of 60 trails, in the video clip.”), and thus discloses determining whether the additional set corresponds to the target word; see also the claim mapping for claim 1); and
- presenting additional feedback in the GUI based on whether the additional set of EMG signals collected over the additional threshold period of time corresponds to the target word (Cha discloses repetition of silent speech classification for 60 trials (pg. 9, 1st para. “A demonstration video of the online SSR can be found at…where a user was silent speaking randomly provided words 60 times. The online classification accuracy was 96.67% for a total of 60 trails, in the video clip.”), and thus discloses presenting the additional feedback in the GUI; see also the claim mapping for claim 1).

Regarding claim 15, Cha discloses presenting a value indicating how many EMG data recording sessions were performed (see Fig. 6, “Experiment Sessions” presents the value “#10”, indicating 10 sessions were performed).

Regarding claim 19, claim 19 is a system claim with limitations similar to those recited in method claim 1, and is thus rejected under similar rationale. Additionally, Cha discloses A system comprising: at least one processor; and at least one memory component having instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to perform operations comprising (Cha discloses that the EMG speech classification method is performed using a computer: pg. 4, section 4, 1st para. “All the signal analyses were conducted using MATLAB 2019a (MathWorks, Inc., Natick, MA, the USA) on a desktop PC (Windows 10, 64 GB-RAM, Intel Core i7 8700 CPU 3.20 GHZ).”, which inherently reads on a system comprising a processor and a memory).

Regarding claim 20, claim 20 is a non-transitory computer-readable storage medium claim with limitations similar to those recited in method claim 1, and is thus rejected under similar rationale.
Additionally, Cha discloses A non-transitory computer-readable storage medium having stored thereon instructions that, when executed by at least one processor, cause the at least one processor to perform operations comprising (Cha discloses that the EMG speech classification method is performed using a computer: pg. 4, section 4, 1st para. “All the signal analyses were conducted using MATLAB 2019a (MathWorks, Inc., Natick, MA, the USA) on a desktop PC (Windows 10, 64 GB-RAM, Intel Core i7 8700 CPU 3.20 GHZ).”, which inherently reads on a storage medium and the processor).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Cha in view of Nalborczyk et al. (NPL “Can we decode phonetic features in inner speech using surface electromyography?”, hereinafter Nalborczyk).
Regarding claim 2, Cha does not specifically disclose wherein the threshold period of time comprises less than two seconds. Nalborczyk teaches wherein the threshold period of time comprises less than two seconds (pg. 8, section “EMG signal processing”, 2nd para. “The periods of interest in all the speech conditions consisted of the 1-second interval during which the participants either produced speech or listened to speech. It is possible that the nonword took less than 1 second to be produced, but since there was no way to track when production started and ended in the inner speech condition, the entire 1-second period was kept. Therefore, the overt speech condition was composed of 6 repetitions of each nonword, that is 6x20 trials of 1 second.”). Cha and Nalborczyk are considered to be analogous to the claimed invention as they are both in the same field of speech processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cha to incorporate the teachings of Nalborczyk in order to have the threshold period of time be less than two seconds. Doing so would be beneficial, as using an entire 1-second interval as the sample for each utterance would ensure that the whole utterance is captured (pg. 8, section “EMG signal processing”, 2nd para. “It is possible that the nonword took less than 1 second to be produced, but since there was no way to track when production started and ended in the inner speech condition, the entire 1-second period was kept”).

5. Claims 3, 9, 14, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Cha in view of Johnson (WO 2025/128961 A1).

Regarding claim 3, Cha does not specifically disclose wherein the target word is selected by a user from a list of target words presented in the GUI. Johnson teaches wherein the target word is selected by a user from a list of target words presented in the GUI (Fig. 8 “The following Challenge Words are in this session. Select each one in turn to learn and practice them before you start reading…”; upon selection, the user repeats a particular challenge word (i.e., 716(1) in Fig. 9A)). Cha and Johnson are considered to be analogous to the claimed invention as they are both in the same field of speech processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cha to incorporate the teachings of Johnson in order to specifically have the target word be selected by a user from a list of target words presented in the GUI. Doing so would be beneficial, as this would provide motivation to the user to interactively learn the most difficult words (para. 0110).

Regarding claim 9, Cha does not specifically disclose wherein the visual attribute comprises at least one of a predetermined animation or color. Johnson teaches wherein the visual attribute comprises at least one of a predetermined animation or color (para. 0124 “Advantageously, the visual cue informs user 120 of the challenge word such that the user tried harder to pronounce it correctly. In certain embodiments, when the user reads the word correctly they receive a visual and or digital reward. For example, reader application 125 may control 110 to cause the word to sparkle off the page when the user reads the word correctly a certain number of times (e.g., ten)…”). Cha and Johnson are considered to be analogous to the claimed invention as they are both in the same field of speech processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cha to incorporate the teachings of Johnson in order to specifically have the visual attribute comprise at least one of a predetermined animation or color. Doing so would further motivate the user via a dopamine hit (para. 0124), providing encouragement for the user to continue with the EMG speech signal data collection.

Regarding claim 14, Cha discloses analyzing different sets of EMG signals [that] were determined to correspond to the target word (Cha, Fig. 4 “Three neural network models based on bLSTM cells were used to classify the fEMG patterns for the silently spoken words…”; pg. 7, section 5 “In this study, the SSR performance was evaluated in terms of a six-class classification accuracy defined as the number of correct trials divided by the number of total trials…”; the classification accuracy calculation determines whether the predicted word matches the target word) and providing feedback (Fig. 6 “Recognition Results” shown: “Answer: Previous, Predict: Previous”, as well as classification accuracy “1”). However, Cha does not specifically disclose wherein the feedback represents historical recording sessions indicating how many times different sets [of EMG signals] were determined to correspond to the target word. Johnson teaches wherein the feedback represents historical recording sessions indicating how many times user speech was determined to correspond to the target word (para. 0071 “Encountered 412 also includes a list of whole words 452 (e.g., the word as written in book 252) such as “bicycling”, “pedestal”, “reading”, etc. For each whole word 452, tracker 334 stores at least three counters: a correct use count 454 indicative of a number of times user 120 has correctly read the whole word…”). Cha and Johnson are considered to be analogous to the claimed invention as they are both in the same field of speech processing.
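To make the cited bookkeeping concrete: Johnson's per-word counters (e.g., correct use count 454) amount to a tally, per target word, of how many recordings were classified as that word. The sketch below is purely illustrative; the class and method names are invented here and are not drawn from Johnson or from the claims.

```python
from collections import defaultdict

class TargetWordTracker:
    """Per-target-word tally across recording sessions (names invented for
    illustration, loosely modeled on Johnson's counters)."""

    def __init__(self):
        self.correct = defaultdict(int)   # times the prediction matched the target
        self.attempts = defaultdict(int)  # total recordings for the target

    def record(self, target_word, predicted_word):
        self.attempts[target_word] += 1
        if predicted_word == target_word:
            self.correct[target_word] += 1

    def feedback(self, target_word):
        # Historical feedback: (matches, total attempts) for the target word.
        return self.correct[target_word], self.attempts[target_word]
```

Feedback of this shape is what claim 14 recites: a count, accumulated over sessions, of how many times the recorded sets were determined to correspond to the target word.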
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cha to incorporate the teachings of Johnson in order to specifically have the feedback represent historical recording sessions indicating how many times different sets of EMG signals were determined to correspond to the target word. Doing so would be beneficial, as this would provide information regarding user proficiency for the target word and would indicate words that the user should focus on if incorrectly spoken a certain number of times (para. 0040).

Regarding claim 16, Cha discloses determining that the set of EMG signals correspond to the target word for a first time (Cha, Fig. 4 “Three neural network models based on bLSTM cells were used to classify the fEMG patterns for the silently spoken words…”; pg. 7, section 5 “In this study, the SSR performance was evaluated in terms of a six-class classification accuracy defined as the number of correct trials divided by the number of total trials…”; the classification accuracy calculation determines whether the predicted word matches the target word). Cha does not specifically disclose presenting an animation in response to [determining that the set of EMG signals correspond to the target word for a first time]. Johnson teaches presenting an animation in response to a determination that a user’s speech corresponds to a target word for a first time (para. 0124 “Advantageously, the visual cue informs user 120 of the challenge word such that the user tried harder to pronounce it correctly. In certain embodiments, when the user reads the word correctly they receive a visual and or digital reward. For example, reader application 125 may control 110 to cause the word to sparkle off the page when the user reads the word correctly a certain number of times (e.g., ten)…”).
Cha and Johnson are considered to be analogous to the claimed invention as they are both in the same field of speech processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cha to incorporate the teachings of Johnson in order to specifically present an animation in response to the determination that the set of EMG signals correspond to the target word for a first time. Doing so would further motivate the user via a dopamine hit (para. 0124), providing encouragement for the user to continue with the EMG speech signal data collection.

Regarding claim 17, Cha discloses determining that the set of EMG signals correspond to the target word… (Cha, Fig. 4 “Three neural network models based on bLSTM cells were used to classify the fEMG patterns for the silently spoken words…”; pg. 7, section 5 “In this study, the SSR performance was evaluated in terms of a six-class classification accuracy defined as the number of correct trials divided by the number of total trials…”; the classification accuracy calculation determines whether the predicted word matches the target word). Cha does not specifically disclose presenting an animation in response to [determining that the set of EMG signals correspond to a target word] for a threshold number of consecutive times. Johnson teaches presenting an animation in response to determining that a user’s speech corresponds to a target word for a threshold number of consecutive times (para. 0124 “Advantageously, the visual cue informs user 120 of the challenge word such that the user tried harder to pronounce it correctly. In certain embodiments, when the user reads the word correctly they receive a visual and or digital reward. For example, reader application 125 may control 110 to cause the word to sparkle off the page when the user reads the word correctly a certain number of times (e.g., ten)…”).
Cha and Johnson are considered to be analogous to the claimed invention as they are both in the same field of speech processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cha to incorporate the teachings of Johnson in order to specifically present an animation in response to the determination that the set of EMG signals correspond to the target word for a threshold number of consecutive times. Doing so would further motivate the user via a dopamine hit (para. 0124), providing encouragement for the user to continue with the EMG speech signal data collection.

6. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Cha in view of Solnik et al. (NPL “Teager-Kaiser energy operator signal conditioning improves EMG onset detection”, hereinafter Solnik).

Regarding claim 12, Cha does not specifically disclose wherein the collection of the set of EMG signals is initiated within 50 milliseconds of receiving the input. Solnik teaches wherein the collection of the set of EMG signals is initiated within 50 milliseconds of receiving the input (pg. 2, section “EMG data processing”, 1st para. “Signals were analog filtered at 10-500 Hz (with first order filter at lower cutoff frequency and sixth order filter at higher cutoff frequency), amplified 2000x and sampled at 1kHz using a TeleMyo 900 telemetric hardware system…”; pg. 4, section “Threshold-based method”, 2nd para. “The estimated onset time t1 was identified as the first point when the smoothed signal exceeded the threshold T for more than 25 consecutive samples…”; the onset time is used to determine when an EMG signal of interest has started; the onset time is ~25 ms after the threshold is met (for an EMG signal sampled at 1 kHz, 25 samples correspond to 25 ms, which is within 50 ms)). Cha and Solnik are considered to be analogous to the claimed invention as they are both in the same field of EMG signal collection.
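The Solnik passage describes a threshold-based onset detector with Teager-Kaiser energy operator (TKEO) signal conditioning: at a 1 kHz sampling rate, requiring 25 consecutive supra-threshold samples places the detected onset about 25 ms after the threshold is first exceeded. The following is a minimal sketch of that scheme; the baseline length and the threshold multiplier `h` are illustrative assumptions, not values taken from Solnik.

```python
def tkeo(x):
    # Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def detect_onset_ms(signal, fs=1000, baseline_samples=100, h=3.0, consecutive=25):
    """Return the onset time in milliseconds: the first point at which the
    TKEO-conditioned signal stays above threshold T for `consecutive`
    samples, or None if no onset is found."""
    psi = tkeo(signal)
    baseline = psi[:baseline_samples]
    mean = sum(baseline) / len(baseline)
    std = (sum((v - mean) ** 2 for v in baseline) / len(baseline)) ** 0.5
    threshold = mean + h * std           # T = baseline mean + h standard deviations
    run = 0
    for i, value in enumerate(psi):
        run = run + 1 if value > threshold else 0
        if run == consecutive:
            onset = i - consecutive + 1  # first sample of the confirming run
            return onset * 1000.0 / fs   # samples -> milliseconds at fs
    return None
```

At fs = 1000 Hz the 25-sample confirmation window is the ~25 ms figure the rejection relies on, which falls within the claimed 50 ms.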
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cha to incorporate the teachings of Solnik in order to have the collection of the set of EMG signals be initiated within 50 milliseconds of receiving the input. Using the taught method would improve the accuracy of EMG onset detection (pg. 9, 4th para.; Abstract).

7. Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Cha in view of Benster et al. (US 12,198,698 B1, hereinafter Benster).

Regarding claim 18, Cha discloses initial training of a machine learning model for EMG speech detection (Cha, see Fig. 2, section (c) “Training & Classification” and section 4.2; neural networks shown in Fig. 4) and using the trained machine learning model to determine whether an additional set of EMG signals collected over an additional threshold period of time corresponds to the target word or corresponds to other target words (Fig. 4 “Three neural network models based on bLSTM cells were used to classify the fEMG patterns for the silently spoken words…”; pg. 7, section 5 “In this study, the SSR performance was evaluated in terms of a six-class classification accuracy defined as the number of correct trials divided by the number of total trials…”; the classification accuracy calculation determines whether the predicted word matches the target word). However, Cha does not specifically disclose updating a collection of EMG training data using the set of EMG signals; retraining a machine learning model to detect EMG signals produced by the individual user based on the updated collection of EMG training data; and using the retrained machine learning model [to determine whether an additional set of EMG signals collected over an additional threshold period of time corresponds to the target word or corresponds to other target words].

Benster teaches updating a collection of EMG training data using the set of EMG signals (Col. 61, Lines 64-65: “In some cases, the system may comprise at least recording additional silent speech data 1660…”; Col. 61, Lines 58-61 “In some cases, the speech articulator data may further comprise sEMG data, accelerometer data, additional imaging data, or a combination thereof.”); retraining a machine learning model to detect EMG signals produced by the individual user based on the updated collection of EMG training data (Col. 61, Lines 65-67 “In some cases, the system may comprise at least recording additional silent speech data 1660 to retrain or finetune the one or more machine learning models 1670 of the system…”); and using the retrained machine learning model (Col. 69, Lines 20-32 “In some cases, any method or system as described herein wherein the silent speech interface may be configured to integrate with a voice assistant. In some cases, the voice assistant may be configured as an AI powered voice assistant. In some cases, a method for silent speech may comprise interacting with conversational AI, wherein a user's recent dialogue context and an ambient sound captured by the microphone (including speech from others) influences the AI's responses. In some cases, a machine learning model for silent speech may comprise an AI engine, where the AI engine is configured to analyze recent dialogue context and an ambient sound captured by the microphone (including speech from others) to generate responses.”). Cha and Benster are considered to be analogous to the claimed invention as they are both in the same field of speech processing.
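Benster's update-and-retrain flow (append newly recorded signals to the training collection, then retrain on the updated collection) can be sketched as follows. The model here is a deliberately simple nearest-centroid stand-in with a scikit-learn-style fit/predict interface; all names and the feature representation are illustrative assumptions, not Benster's implementation.

```python
class CentroidWordClassifier:
    """Stand-in model: classifies an EMG feature vector by nearest class centroid."""

    def fit(self, features, labels):
        sums, counts = {}, {}
        for vec, label in zip(features, labels):
            acc = sums.setdefault(label, [0.0] * len(vec))
            for i, v in enumerate(vec):
                acc[i] += v
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {
            label: [v / counts[label] for v in acc]
            for label, acc in sums.items()
        }
        return self

    def predict(self, vec):
        dist = lambda c: sum((a - b) ** 2 for a, b in zip(vec, c))
        return min(self.centroids, key=lambda lbl: dist(self.centroids[lbl]))


def update_and_retrain(train_features, train_labels, new_features, new_labels):
    # Update the collection of EMG training data with the newly recorded
    # signals, then retrain the model on the updated collection.
    train_features.extend(new_features)
    train_labels.extend(new_labels)
    return CentroidWordClassifier().fit(train_features, train_labels)
```

In practice the retrained model would then classify each additional set of EMG signals against the target words, as the claim 18 discussion describes.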
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cha to incorporate the teachings of Benster in order to update a collection of EMG training data using the set of EMG signals, retrain a machine learning model to detect EMG signals produced by the individual user based on the updated collection of EMG training data, and use the retrained model. Doing so would be beneficial, as this would improve model performance if the system is performing poorly with the initial model (Col. 62, Lines 58-62).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Tadi et al. (US 2023/0418380 A1): detecting facial expressions utilizing EMG signals (Fig. 12B)

Liang et al. (NPL “A Variable-speech Silent Speech Recognition Method based on Surface Electromyography Signal”): collection of EMG silent speech signals (Fig. 2, experimental paradigm)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CODY DOUGLAS HUTCHESON, whose telephone number is (703) 756-1601. The examiner can normally be reached M-F 8:00AM-5:00PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre-Louis Desir, can be reached at (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CODY DOUGLAS HUTCHESON/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659

Prosecution Timeline

Apr 25, 2024
Application Filed
Mar 11, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603096
VOICE ENHANCEMENT METHODS AND SYSTEMS
2y 5m to grant Granted Apr 14, 2026
Patent 12591750
GENERATIVE LANGUAGE MODEL UNLEARNING
2y 5m to grant Granted Mar 31, 2026
Patent 12579447
TECHNIQUES FOR TWO-STAGE ENTITY-AWARE DATA AUGMENTATION
2y 5m to grant Granted Mar 17, 2026
Patent 12537018
METHOD AND SYSTEM FOR PREDICTING A MENTAL CONDITION OF A SPEAKER
2y 5m to grant Granted Jan 27, 2026
Patent 12530529
DOMAIN-SPECIFIC NAMED ENTITY RECOGNITION VIA GRAPH NEURAL NETWORKS
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
99%
With Interview (+47.1%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
