Prosecution Insights
Last updated: April 19, 2026
Application No. 18/151,832

System And Method For Measuring Human Intention

Non-Final OA §103
Filed: Jan 09, 2023
Examiner: VO, HUYEN X
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Cerebian Inc.
OA Round: 3 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (869 granted / 1043 resolved; +21.3% vs TC avg, above average)
Interview Lift: +19.9% for resolved cases with an interview (strong lift)
Typical timeline: 2y 10m average prosecution; 17 applications currently pending
Career history: 1060 total applications across all art units

Statute-Specific Performance

§101: 24.9% (-15.1% vs TC avg)
§103: 33.0% (-7.0% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 5.7% (-34.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 1043 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114 and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/17/2026 has been entered.

Response to Arguments

Applicant's arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1, 3, 5-8, 11, 13, and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Yun in view of Ayyad (WO 2018/141061).
Regarding claims 1 and 11, Yun discloses a method of and device for generating speech from human intent comprising: at least one sensor for measuring signals (figure 1 and/or paragraph 23; "receiving speech command of a user" suggests a microphone or other means for receiving speech input); a processor (figure 1); one or more deep learning modules (paragraphs 37-39, neural network); a wearable portion comprising the at least one sensor (figure 1 and/or paragraph 22); and a memory storing computer-executable instructions that, when executed by the processor (figure 1), cause the device to: perform a training phase comprising training one or more deep learning modules on a first dataset collected from a first user (paragraphs 37-40; these neural-network-based models have already been trained on at least data of a first user before deployment); and perform a deployment phase for a second user (at the deployment phase, these NN-based models are put to use) comprising: calibrating the trained one or more deep learning modules for the second user by retraining at least one, but not all, layers of the one or more deep learning modules using a second dataset collected from the second user, wherein the second dataset is smaller than the first dataset (paragraphs 79 and 87-88, retraining only some layers of the NN, not all layers); sensing signals (paragraphs 28-31, receiving speech input); processing the signals using the one or more deep learning modules (paragraphs 28-31); and converting the processed signals into an output (paragraphs 28-31); wherein the signals comprise voluntary intentions (paragraphs 28-31, receiving speech input, which conveys "voluntary intentions").
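The calibration limitation mapped above (retraining at least one, but not all, layers on a smaller second-user dataset) is a standard transfer-learning pattern. As a minimal sketch, assuming a toy two-layer network with hypothetical shapes and stubbed data (none of this comes from Yun or the application itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training phase": weights assumed already learned from a large first-user
# dataset (stubbed here with random values).
W1 = rng.standard_normal((4, 8))   # early layer: frozen during calibration
W2 = rng.standard_normal((8, 2))   # final layer: the only layer retrained

# "Deployment phase": a much smaller dataset collected from the second user.
X2 = rng.standard_normal((16, 4))
Y2 = rng.standard_normal((16, 2))

W1_before = W1.copy()
loss_before = float(np.mean((np.tanh(X2 @ W1) @ W2 - Y2) ** 2))

# Calibration: gradient steps on the mean-squared error, updating only W2.
lr = 0.05
for _ in range(200):
    h = np.tanh(X2 @ W1)              # frozen feature extractor
    err = h @ W2 - Y2
    W2 -= lr * (h.T @ err) / len(X2)  # gradient step on the final layer only

loss_after = float(np.mean((np.tanh(X2 @ W1) @ W2 - Y2) ** 2))
```

The frozen early layer preserves what was learned from the first user's large dataset, while the small second-user dataset only has to fit the final layer, which is the practical point of the "at least one, but not all, layers" limitation.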
Yun fails to explicitly disclose, but Ayyad teaches, that the sensed signals are biological signals comprising at least one of brain signals and muscle signals associated with speech production (abstract and/or paragraph 7, "brain activity"); processing the biological signals using one or more deep learning modules directly, without applying fixed signal processing algorithms (abstract and/or paragraph 7, "providing the plurality of signals, without pre-processing, to a processing system comprising at least one deep learning module, the at least one deep learning module being configured to process the signals to generate at least one capability"); wherein the biological signals comprise voluntary intentions to speak or generate sound measured before actual generation of the intended sound (paragraphs 41 and 159-160, "Brain-to-speech").

Since Yun and Ayyad are analogous art from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to use the known technique of obtaining brain signals directly, without pre-processing, to generate speech. One of ordinary skill in the art would have recognized that the results of the combination were predictable, since the use of that known technique provides the rationale to arrive at a conclusion of obviousness. See KSR International Co. v. Teleflex Inc., 82 USPQ2d 1385 (U.S. 2007).

Regarding claims 3 and 13, Ayyad further discloses wherein processing the biological signals comprises processing raw, non-pre-processed signals directly at the one or more deep learning modules, without applying the fixed signal processing algorithms (abstract and/or paragraph 7, "providing the plurality of signals, without pre-processing, to a processing system comprising at least one deep learning module, the at least one deep learning module being configured to process the signals to generate at least one capability").
Since Yun and Ayyad are analogous art from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to use the known technique of obtaining brain signals directly, without pre-processing, to generate speech. One of ordinary skill in the art would have recognized that the results of the combination were predictable, since the use of that known technique provides the rationale to arrive at a conclusion of obviousness. See KSR International Co. v. Teleflex Inc., 82 USPQ2d 1385 (U.S. 2007).

Regarding claims 5 and 15, Yun further discloses wherein the output is text or automatically generated speech (paragraphs 24-25; see also tables 1-2, output can be text or speech).

Regarding claims 6 and 16, Yun fails to explicitly disclose, but Ayyad further teaches, wherein the source is localized in auditory areas of the brain (paragraphs 169 and 172). Since Yun and Ayyad are analogous art from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention that the source is localized in the auditory areas of the brain. One of ordinary skill in the art would have recognized that the results of the combination were predictable, since the use of that known technique provides the rationale to arrive at a conclusion of obviousness. See KSR International Co. v. Teleflex Inc., 82 USPQ2d 1385 (U.S. 2007).

Regarding claims 7-8 and 17-18, Yun further discloses wherein sounds of words are provided to the deep learning modules at the training phase (paragraphs 79 and 88, training the NN with speech initially and also at runtime); and wherein text corresponding to words is provided to the deep learning modules at the training phase (paragraphs 37 and 77, training the NN model with text).

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Yun in view of Ayyad, and further in view of Tran.

Regarding claims 4 and 14, the combination of Yun and Ayyad further discloses wherein the brain signals are sourced, at least in part, from auditory areas of the brain of the first and/or second user (Ayyad: paragraph 43, "electrodes recorded the electrical activity in the user's brains, body parts such as arms, and hands"). The combination of Yun and Ayyad still fails to explicitly disclose, but Tran further teaches, wherein the signals comprise the combination of brain and muscle signals (paragraph 74, "electrodes recorded the electrical activity in the user's brains, body parts such as arms, and hands"). Since the modified Yun and Tran are analogous art from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to use the known techniques to obtain a combination of signals. One of ordinary skill in the art would have recognized that the results of the combination were predictable, since the use of that known technique provides the rationale to arrive at a conclusion of obviousness. See KSR International Co. v. Teleflex Inc., 82 USPQ2d 1385 (U.S. 2007).

Claims 9-10 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yun in view of Ayyad, further in view of Tran, and further in view of Yang.

Regarding claims 9-10 and 19-20, the modified Tran still fails to explicitly disclose, but Yang teaches, wherein sounds of words are labelled by the deep learning modules at the training phase (paragraphs 49 and 182, "unsupervised learning" method in which a label or correct answer is not provided and the NN determines the answer on its own); and wherein sounds of words are labelled prior to being provided to the deep learning modules at the training phase (paragraphs 49 and 182, "supervised learning" method in which a label or correct answer is provided).
Since the modified Tran and Yang are analogous art from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to use the known techniques of supervised learning and unsupervised learning to train a NN. One of ordinary skill in the art would have recognized that the results of the combination were predictable, since the use of that known technique provides the rationale to arrive at a conclusion of obviousness. See KSR International Co. v. Teleflex Inc., 82 USPQ2d 1385 (U.S. 2007).

Allowable Subject Matter

Claims 21-22 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Gallego (USPG 2019/0253812) teaches a method of using periauricular muscle signals to estimate the direction of a user's auditory attention locus that is considered pertinent to the claimed invention.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUYEN X VO, whose telephone number is (571) 272-7631. The examiner can normally be reached M-F, 8-4. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/HUYEN X VO/
Primary Examiner, Art Unit 2656
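For readers less familiar with the distinction the Yang reference is cited for against claims 9-10 and 19-20: in supervised learning the labels accompany the training sounds, while in unsupervised learning the module derives labels on its own. A minimal, purely illustrative sketch (all data and names are hypothetical, not from the record; a midpoint threshold stands in for a real clustering step):

```python
# Supervised: each sound feature arrives already paired with its label
# ("labelled prior to being provided to the deep learning modules").
supervised_data = [(0.1, "ba"), (0.2, "ba"), (0.9, "da"), (1.0, "da")]
labels_provided = [label for _, label in supervised_data]

# Unsupervised: only features are given; the module assigns its own labels
# ("labelled by the deep learning modules"), here by thresholding each
# feature against the midpoint of the observed range.
features = [0.1, 0.2, 0.9, 1.0]
midpoint = (min(features) + max(features)) / 2
labels_derived = ["cluster0" if f < midpoint else "cluster1" for f in features]
```

The derived cluster names carry no meaning by themselves, which is why the two claim variants ("labelled by" versus "labelled prior to") are treated as distinct limitations.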

Prosecution Timeline

Jan 09, 2023
Application Filed
Feb 07, 2025
Non-Final Rejection — §103
Aug 11, 2025
Response Filed
Oct 14, 2025
Final Rejection — §103
Feb 17, 2026
Request for Continued Examination
Feb 22, 2026
Response after Non-Final Action
Mar 13, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603083
ESTIMATION DEVICE, ESTIMATION METHOD, AND RECORDING MEDIUM
2y 5m to grant • Granted Apr 14, 2026
Patent 12596873
OPTIMIZATION OF RETRIEVAL AUGMENTED GENERATION USING DATA-DRIVEN TEMPLATES
2y 5m to grant • Granted Apr 07, 2026
Patent 12586594
GUIDING AMBISONIC AUDIO COMPRESSION BY DECONVOLVING LONG WINDOW FREQUENCY ANALYSIS
2y 5m to grant • Granted Mar 24, 2026
Patent 12579990
ENCODING DEVICE, DECODING DEVICE, ENCODING METHOD, AND DECODING METHOD
2y 5m to grant • Granted Mar 17, 2026
Patent 12572755
SYSTEM AND METHOD FOR AUGMENTING TRAINING DATA FOR NATURAL LANGUAGE TO MEANING REPRESENTATION LANGUAGE SYSTEMS
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 99% (+19.9%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 1043 resolved cases by this examiner. Grant probability derived from career allow rate.
