Prosecution Insights
Last updated: April 19, 2026
Application No. 18/335,864

HUMAN AUGMENTATION PLATFORM USING CONTEXT, BIOSIGNALS, AND LANGUAGE MODELS

Non-Final OA — §103, §112
Filed
Jun 15, 2023
Examiner
GARNER, CASEY R
Art Unit
2123
Tech Center
2100 — Computer Architecture & Software
Assignee
Cognixion Corporation
OA Round
1 (Non-Final)
Grant Probability: 70% (Favorable)
OA Rounds: 1-2
To Grant: 3y 7m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 70% (184 granted / 261 resolved) — above average, +15.5% vs TC avg
Interview Lift: +16.8% across resolved cases with interview (strong)
Avg Prosecution: 3y 7m typical timeline; 19 currently pending
Career History: 280 total applications across all art units

Statute-Specific Performance

§101: 30.6% (-9.4% vs TC avg)
§103: 45.7% (+5.7% vs TC avg)
§102: 7.1% (-32.9% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 261 resolved cases
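The per-statute deltas can be cross-checked against the caption's single Tech Center baseline. A minimal Python sketch, assuming each delta is simply the examiner's rate minus the Tech Center average estimate (the page does not state the formula):

```python
# Per-statute allowance rates and deltas as reported above (percent).
# Assumption: delta = examiner_rate - tc_avg_estimate.
stats = {
    "§101": (30.6, -9.4),
    "§103": (45.7, +5.7),
    "§102": (7.1, -32.9),
    "§112": (12.2, -27.8),
}

# Recover the implied Tech Center baseline for each statute.
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)
```

Every statute recovers the same 40.0% baseline, which is consistent with the caption describing one "Tech Center average estimate" line rather than per-statute averages.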

Office Action

§103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Application filed on 06/15/2023. Claims 1-20 are pending in the case. Claims 1 and 11 are independent claims.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: context subsystem, biosignals subsystem, prompt composer, pre-trained Generative Artificial Intelligence (GenAI) model, and output stage in claim 1.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

Claim limitations “context subsystem,” “biosignals subsystem,” “prompt composer,” “pre-trained Generative Artificial Intelligence (GenAI) model,” and “output stage” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. No clear association between the structure and the function can be found in the specification. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. 
For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.

Claims 1, 2, 4-8, 10-12, 14-18, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Mielke et al. (U.S. Pat. App. Pub. No. 
2023/0135179, hereinafter Mielke) in view of Yang et al. (U.S. Pat. App. Pub. No. 2024/0412029, hereinafter Yang).

As to independent claims 1 and 11, Mielke teaches: The system comprising (Title and abstract): a context subsystem configured to receive at least one of background material, sensor data, and other device data as information that is used in part to infer a context for a user (Paragraph 6, "The assistant system may create and store a user profile comprising both personal and contextual information associated with the user"); a biosignals subsystem configured to receive at least one of a physically sensed signal… (Paragraph 41, "the user may interact with the assistant system 140 by providing a user input (e.g., a verbal request for information regarding a current status of nearby vehicle traffic) to the assistant xbot via a microphone of the client system 130." The microphone reads on physically sensed signal from the user); a prompt composer configured to receive an input from at least one of the context subsystem and the biosignals subsystem to generate a prompt that identifies at least one of a requested output modality and a desired output modality (Paragraph 82, "The render output module 232 may determine how to render the output in a way that is suitable for the client system 130"); a pre-trained Generative Artificial Intelligence (GenAI) model configured to utilize the prompt to generate a multimodal output (Paragraph 41, "The assistant application 136 may then present the responses to the user at the client system 130 via various modalities (e.g., audio, text, image, and video)." 
Paragraph 41, "(e.g., displaying a text-based push notification and/or image(s) illustrating a local map of nearby vehicle traffic on a display of the client system 130"); an output stage configured to transform the multimodal output into at least one form of user agency, user capability augmentation, and combinations thereof (Paragraph 82, "the response may be rendered as augmented-reality data for enhancing user experience"); and logic to: tokenize the at least one of the background material, the sensor data, and the other device data into context tokens suitable to prompt the GenAI model (Paragraph 266, "The assistant system 140 may simply “prompting” an off-the-shelf pre-trained language model with the tokenized example dialogues."); tokenize the at least one of the physically sensed signal and the neurologically sensed signal into biosignal tokens suitable to prompt the GenAI model (Paragraph 349, "determine what words were spoken by the user"); generate a context prompt from at least one of the context tokens and the biosignal tokens (Paragraph 266, "For example, the prompt may be “share the tourist office location with Andy. Right?”"); prompt the GenAI model with the context prompt and receive the multimodal output from the GenAI model (Paragraph 41, "(e.g., displaying a text-based push notification and/or image(s) illustrating a local map of nearby vehicle traffic on a display of the client system 130"); and transform the multimodal output into the at least one form of the user agency, the user capability augmentation, and combinations thereof (Paragraph 82, "the response may be rendered as augmented-reality data for enhancing user experience").

Mielke does not appear to expressly teach a neurologically sensed signal from the user. 
Yang teaches a neurologically sensed signal from the user (Paragraph 35, "predict the affective states (e.g., emotional sentiments) of the user 210 from cognitive data (e.g., electroencephalogram (EEG) data), physiological data (e.g., heart rate, perspiration rate, body temperature, etc.)". Paragraph 69.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the smart assistant of Mielke to include the AI techniques of Yang to include the crucial part of human communication, i.e., all the contextual information that typically accompanies the verbal communication (see Yang at paragraph 17).

As to dependent claims 2 and 12, Mielke further teaches the context subsystem receives the at least one of the sensor data and the other device data from at least one of a camera and a microphone array (Paragraph 42, "the non-audio user inputs may be specific visual signals detected by a low-power sensor (e.g., camera) of client system 130." Paragraph 62, "e.g., microphone").

As to dependent claims 4 and 14, Yang further teaches the biosignals subsystem receives data from biometric sensors for at least one of electroencephalography (EEG), electrocorticography (ECoG), electrocardiogram (ECG or EKG), electromyography (EMG), electrooculography (EOG), pulse determination, heart rate variability determination, blood sugar sensing, and dermal conductivity determination (Paragraph 35, "predict the affective states (e.g., emotional sentiments) of the user 210 from cognitive data (e.g., electroencephalogram (EEG) data), physiological data (e.g., heart rate, perspiration rate, body temperature, etc.)". Paragraph 69.). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the smart assistant of Mielke to include the AI techniques of Yang to include the crucial part of human communication, i.e., all the contextual information that typically accompanies the verbal communication (see Yang at paragraph 17).

As to dependent claims 5 and 15, Mielke further teaches the prompt composer constructs at least one of: a single token; a string of tokens; a series of conditional or unconditional commands suitable to prompt the GenAI model; tokens that identify at least one of the requested output modality and the desired output modality; an embedding to be provided separately to the GenAI model for use in an intermediate layer of the GenAI model; and multiple tokenized sequences at once that constitute a series of conditional commands (Paragraph 266, "For example, the prompt may be “share the tourist office location with Andy. Right?”").

As to dependent claims 6 and 16, Yang further teaches the pre-trained GenAI model is at least one of large language models (LLMs), Generative Pre-trained Transformer (GPT) models, text-to-image creators, visual art creators, and generalist agent models (Paragraph 114, "the generative AI (i.e., the LLM)"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the smart assistant of Mielke to include the AI techniques of Yang to include the crucial part of human communication, i.e., all the contextual information that typically accompanies the verbal communication (see Yang at paragraph 17). 
As to dependent claims 7 and 17, Mielke further teaches the output stage is configured to receive an output mode selection signal from the user through biosignals, wherein the output mode selection signal at least one of: instructs the output stage of a choice between the multimodal outputs; and instructs the output stage to direct one or more of alternative multimodal outputs to alternate endpoints (Paragraph 114, "the CU composer 370 may also determine a modality of the generated communication content using the UI payload generator 374").

As to dependent claims 8 and 18, Mielke further teaches the multimodal output is in the form of at least one of text-to-speech utterances, written text, multimodal artifacts, other user agency supportive outputs, and commands to a non-language user agency device (Paragraph 112, "a text-to-speech (TTS) component 390." Paragraph 41, "may then present the responses to the user at the client system 130 via various modalities (e.g., audio, text, image, and video)").

As to dependent claim 10, Mielke further teaches an encoder/parser framework for additionally encoding multimodal output; and logic to: provide control commands to control at least one of: a non-language user agency device; a robot system; and smart AI-powered devices (Paragraph 43, "the client system 130, the rendering device 137, and the companion device 138 may operate as a smart assistant device").

Claims 3 and 13 are rejected under 35 U.S.C. § 103 as being unpatentable over Mielke in view of Yang and Xiaofan Jia et al. (Jia, Xiaofan, Sadeed Bin Sayed, Nahian Ibn Hasan, Luis J. Gomez, Guang-Bin Huang, and Abdulkadir C. Yucel. "DeeptDCS: Deep Learning-Based Estimation of Currents Induced During Transcranial Direct Current Stimulation." arXiv preprint arXiv:2205.01858 (2022), hereinafter Xiaofan Jia).

As to dependent claims 3 and 13, the respective rejections of claim 1 and 11 are incorporated. 
Mielke does not appear to expressly teach the at least one form of the user agency includes neural stimulation to the user with Transcranial Direct Current Stimulation (tDCS). Xiaofan Jia teaches the at least one form of the user agency includes neural stimulation to the user with Transcranial Direct Current Stimulation (tDCS) (Page 2, paragraph spanning left and right column, page 3, right column, paragraph 1). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the smart assistant of Mielke to include the tDCS techniques of Xiaofan Jia to solve the problem of how to identify an alternative output modality.

Claims 9 and 19 are rejected under 35 U.S.C. § 103 as being unpatentable over Mielke in view of Yang and Coursey et al. (U.S. Pat. No. 11,645,479, hereinafter Coursey).

As to dependent claims 9 and 19, the respective rejections of claim 1 and 11 are incorporated. Mielke does not appear to expressly teach the output stage including an output adequacy feedback system, including logic to: detect an event related potential (ERP) in response to a multimodal output suggestion; on condition the ERP is detected, performing at least one of: provide feedback to at least one of: the user; and the prompt composer, wherein the prompt composer provides the feedback to the GenAI model; wherein the feedback includes at least one of the ERP and a current context state; record the ERP to the multimodal output suggestion; automatically reject the multimodal output suggestion, generate new prompts with rejection feedback tokens, and send the rejection feedback tokens to the prompt composer; and on condition no ERP is detected: allow the multimodal output suggestion to proceed. 
Coursey teaches the output stage including an output adequacy feedback system, including logic to: detect an event related potential (ERP) in response to a multimodal output suggestion; on condition the ERP is detected, performing at least one of: provide feedback to at least one of: the user; and the prompt composer, wherein the prompt composer provides the feedback to the GenAI model; wherein the feedback includes at least one of the ERP and a current context state; record the ERP to the multimodal output suggestion; automatically reject the multimodal output suggestion, generate new prompts with rejection feedback tokens, and send the rejection feedback tokens to the prompt composer; and on condition no ERP is detected: allow the multimodal output suggestion to proceed (Column 14, penultimate paragraph). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the smart assistant of Mielke to include the AI techniques of Coursey to provide a more successful virtual agent for language interactions (see Coursey at column 1, lines 40 and 41).

Citation of Pertinent Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zhang et al. (U.S. Pat. App. Pub. No. 2024/0242442) teaches a system for supplementing user perception and experience via augmented reality (AR), artificial intelligence (AI), and machine-learning (ML) techniques. The system may include a processor and a memory storing instructions. The processor, when executing the instructions, may cause the system to receive data associated with at least one of a location, context, or setting and determine, using at least one artificial intelligence (AI) model and at least one machine learning (ML) model, relationships between objects in the at least one of the location, context, or setting. 
The processor, when executing the instructions, may then apply an artificial intelligence (AI) agent to analyze the relationships and generate a three-dimensional (3D) mapping of the at least one of the location, context, or setting and provide an output to aid a user's perception and experience.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Casey R. Garner whose telephone number is 571-272-2467. The examiner can normally be reached Monday to Friday, 8am to 5pm, Eastern Time. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov, can be reached on 571-270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR to authorized users only. 
Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/Casey R. Garner/
Primary Examiner, Art Unit 2123

Prosecution Timeline

Jun 15, 2023
Application Filed
Feb 09, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596937
METHOD AND APPARATUS FOR ADAPTING MACHINE LEARNING TO CHANGES IN USER INTEREST
2y 5m to grant — Granted Apr 07, 2026
Patent 12585994
ACCURATE AND EFFICIENT INFERENCE IN MULTI-DEVICE ENVIRONMENTS
2y 5m to grant — Granted Mar 24, 2026
Patent 12579451
MINIMAL UNSATISFIABLE SET DETECTION APPARATUS, MINIMAL UNSATISFIABLE SET DETECTION METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
2y 5m to grant — Granted Mar 17, 2026
Patent 12572822
FLEXIBLE, PERSONALIZED STUDENT SUCCESS MODELING FOR INSTITUTIONS WITH COMPLEX TERM STRUCTURES AND COMPETENCY-BASED EDUCATION
2y 5m to grant — Granted Mar 10, 2026
Patent 12573187
Self-Learning in Distributed Architecture for Enhancing Artificial Neural Network
2y 5m to grant — Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview (+16.8%): 87%
Median Time to Grant: 3y 7m
PTA Risk: Low
Based on 261 resolved cases by this examiner. Grant probability derived from career allow rate.
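The note above says the grant probability is derived from the career allow rate, with the interview figure adding the observed lift. A minimal sketch of that arithmetic, assuming the "with interview" number is simply base rate plus lift (the page does not disclose the vendor's actual model):

```python
# Headline projections rebuilt from the examiner's career stats.
granted, resolved = 184, 261
base = granted / resolved          # career allow rate ≈ 0.705
interview_lift = 0.168             # +16.8% lift for cases with an interview
with_interview = base + interview_lift

print(f"Grant probability: {base:.0%}")
print(f"With interview: {with_interview:.0%}")
```

Both figures round to the 70% and 87% shown above, so simple addition reproduces the dashboard's numbers.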
