Prosecution Insights
Last updated: April 19, 2026
Application No. 17/441,201

SYSTEM AND METHOD FOR ENHANCING OR REHABILITATING THE COGNITIVE SKILLS OF A SUBJECT

Final Rejection: §103, §112
Filed: Sep 20, 2021
Examiner: MERRIAM, AARON ROGERS
Art Unit: 3791
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Restorative Neurotechnologies S.R.L.
OA Round: 2 (Final)

Grant Probability: 25% (At Risk)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 25% (5 granted / 20 resolved; -45.0% vs TC avg) — grants only 25% of cases
Interview Lift: +88.2% on resolved cases with interview
Avg Prosecution: 3y 6m (typical timeline)
Total Applications: 76 across all art units; 56 currently pending

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 44.3% (+4.3% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 30.5% (-9.5% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 20 resolved cases
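The per-statute deltas above are internally consistent: in every row, the examiner's rate minus the quoted delta yields the same implied Tech Center average of 40.0%. A minimal Python check of that arithmetic (the variable names and the back-derivation of the TC average are our assumptions, not something the analytics export states):

```python
# Examiner's statute-specific rates and the reported "vs TC avg" deltas,
# copied from the table above.
examiner_rate = {"101": 7.6, "103": 44.3, "102": 15.1, "112": 30.5}
delta = {"101": -32.4, "103": 4.3, "102": -24.9, "112": -9.5}

for statute, rate in examiner_rate.items():
    # Assumed relationship: delta = examiner rate - Tech Center average,
    # so the implied TC average is rate - delta.
    tc_avg = rate - delta[statute]
    print(f"§{statute}: {rate}% ({delta[statute]:+.1f}% vs TC avg {tc_avg:.1f}%)")
```

Every statute works out to an implied TC average of 40.0%, which suggests the dashboard compares each rate against a single Tech Center baseline rather than per-statute baselines.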

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Applicant's arguments, filed 8/1/2025, have been fully considered. The following rejections and/or objections are either reiterated or newly applied; they constitute the complete set presently being applied to the instant application. Applicants have amended their claims, filed 8/1/2025, and any new grounds of rejection made in the instant Office action have been necessitated by amendment. Claims 19-38 are the claims currently pending and under examination. Claims 21, 25-27, 31-32, and 35-36 have been amended, and claims 37-38 are newly added.

Claim Objections

Claim 32 is objected to because of the following informality: in claim 32, line 6, "to receiving means" should be "to a receiving means". Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that use the word "means" and are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Such claim limitation is: claim 32, lines 4-6, which recites "receiving means coupled to said optical instrument, said receiving means configured to convert," whereas the specification in [0053]–[0054] states only that the receiving means is "not shown" and provides no corresponding structural details. Without clear structure, the claim is indefinite under 35 U.S.C. § 112(f).

Because this claim limitation is being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it is being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this limitation interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation recites sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 19-38 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the applicant, regards as the invention.

Claim 19 recites "a visual target stimulus" in line 3 and "a predetermined sequence of visual target stimuli" in line 6, but it is unclear whether these stimuli are the same, related, or different, rendering the claim indefinite. The Examiner is interpreting the stimuli to be either the same visual target at multiple positions or multiple distinct targets presented over time. Claims 20-34 and 37-38 are rejected by virtue of their dependence from claim 19.
Claim 19 recites "aiming of the visual target stimulus" in line 7, while also referring to "aiming movements" and "aiming position" in lines 16-17. In the first instance it is unclear whether the subject is aiming at the stimulus or the stimulus itself is somehow being aimed, rendering the claim indefinite; in the subsequent instances it is unclear whether the aiming is the same as, related to, or different from the initial instance. The Examiner is interpreting "aiming of the visual target stimulus" to mean that the subject is directing an action (e.g., a touch or pointer) toward the displayed stimulus. If different instances of these terms are meant to indicate the same thing, they should be clearly identified as such and should use proper antecedent basis.

Claim 25 recites "substantial coincidence" in line 4, but the phrase lacks objective boundaries or a quantifiable threshold by which coincidence is evaluated, rendering the claim indefinite. Additionally, the specification does not provide a clear standard for measuring this coincidence. The Examiner is interpreting the term to mean that the subject's input is close enough to the displayed target to meet a system-defined tolerance.

Claim 32 recites "receiving means coupled to said optical instrument" in line 4, but it is unclear what the receiving means is, what structure it includes, or how it receives and converts the electromagnetic signal into an acoustic stimulus, rendering the claim indefinite. The specification in ¶[0053]–[0054] refers to the receiving means as "not shown" and provides no details about its components, operation, or implementation. The Examiner is interpreting "receiving means" to refer to an unspecified electronic module, such as an audio speaker or bone-conduction headphones, integrated into or attached to the optical instrument for converting received signals into acoustic output. Claims 33 and 34 are rejected by virtue of their dependence from claim 32.

Claim 35 recites "a visual target stimulus" in line 4 and "a predetermined sequence of visual target stimuli" in line 6, but it is unclear whether these stimuli are the same, related, or different, rendering the claim indefinite. The Examiner is interpreting the stimuli to be either the same visual target or multiple distinct targets presented over time. Claims 36-38 are rejected by virtue of their dependence from claim 35.

Claim 35 recites "aiming of the visual target stimulus" in step e), while also referring to "aiming movements" and "aiming position" in steps f) and g); it is unclear whether these refer to the same action by the subject or represent distinct components of the user's interaction with the system, rendering the claim indefinite. The Examiner is interpreting these terms to refer collectively to the subject's manual or sensor-based attempt to align their input with the visual target.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 19-20, 22-25, and 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20180143456 A1), hereinafter Chen, in view of Ishikawa et al. (US 20150320350 A1), hereinafter Ishikawa, and further in view of Grassi (US 20160349533 A1), hereinafter Grassi.

Regarding claim 19, Chen teaches a system for cognitive enhancement or rehabilitation of a subject, the system comprising:

a screen for displaying a visual target stimulus (Chen, ¶[0031]: "The depicted eyewear 60 comprises... left and right video display screens 70 and 75"; ¶[0004]: "During a treatment session, the patient wears prism goggles while performing tasks such as reaching to targets"; disclosing screens positioned in front of each eye that display video of visual targets to the patient);

an electronic image generation unit (Chen, ¶[0031]: "The system 50 comprises eyewear 60 coupled to a processor 90, such as a video processor", describing an electronic image generation unit);

aiming sensor means configured to detect aiming of the visual target stimulus by the subject (Chen, ¶[0034]: "These processed video images correspond to images captured by the cameras 80 and 85, respectively, processed by the video processor to emulate the required visual field occlusion in the screens 70 and 75, blacking out of substantial portions of the patient's arm movements…". Chen discloses that video cameras capture the patient's arm movements during aiming and that the video processor interprets this input; the captured motion allows the system to monitor where the patient is aiming in relation to visual targets. ¶[0059]: "The mean of the post-prism pointing errors should be a smaller value or more negative than the mean of the pre-prism pointing errors, indicating a leftward shift of pointing behavior. This is called prism aftereffect", establishing that the system compares the patient's aiming direction (as detected from arm movement) with the known target location to determine the degree of deviation, supporting that the system uses visual input as a functional aiming sensor. Figure 6: the graph labeled "Aiming" shows that the system isolates the subject's intended motor response direction, demonstrating that Chen not only detects the subject's motor response (aiming) toward the target but also processes this information independently of perception and in association with visual stimuli);

an optical instrument wearable by said subject, comprising at least one prismatic lens configured to divert a beam of light rays coming from the visual target stimulus so as to induce a vision perturbation of the visual target stimulus to the subject (Chen, FIG. 5 and ¶[0036]: "The required prism effect of the processed signals supplied to the video screens 70 and 75 may be implemented by the video processor emulation or prisms disposed proximate lens of cameras 80 and 85. Specifically, the eyewear 60 may use selectively rotatable or substitutable prisms proximate the camera lenses to achieve the desired prism angle and direction", describing prism-equipped eyewear with selectable magnitude and direction of field shift, using a frame (i.e., housing) that holds rotatable lenses);

recording means for recording aiming movements and/or an aiming position by the subject in association with each visual target stimulus (Chen, ¶[0042]: "The treatment typically begins by determining a patient's pre-exposure baseline measurement of pointing performance. After the baseline is measured... followed by measurement of post-exposure after-effect of adaptation persistence". Chen discloses baseline and follow-up recordings of pointing behavior, showing that performance is tracked in association with stimulus presentation. Chen also uses cameras to monitor movement: ¶[0034] discloses that "processed video images correspond to images captured by the cameras 80 and 85," allowing the system to monitor aiming behavior, and Figure 6 provides graphical output of "Aiming" and "Where + Aiming," confirming that these metrics are recorded for analysis across treatment sessions);

processing means for determining an amount of deviation of said aiming position with respect to a displayed position of the visual target stimulus (Chen, ¶[0042]: "The treatment typically begins by determining a patient's pre-exposure baseline measurement of pointing performance. After the baseline is measured, eyewear comprising at least one lens having a prism mounted thereon is donned by the patient... followed by measurement of post-exposure after-effect of adaptation persistence". Chen describes a process in which the patient's aiming behavior is measured both before and after prism exposure to assess change. ¶[0059] further explains: "The mean of the post-prism pointing errors should be a smaller value or more negative than the mean of the pre-prism pointing errors, indicating a leftward shift of pointing behavior"; these comparisons show that an amount of deviation is calculated, the deviation being derived by comparing the known target location with the pointing behavior captured by the system. ¶[0034] clarifies that cameras 80 and 85 capture arm movement and that the video processor processes this data to generate the display content, confirming that the processor receives and analyzes aiming movement; Figure 6 demonstrates that deviation data is categorized and tracked, supporting that deviation is calculated based on aiming relative to the target stimulus); and

storage means for storing data indicative of a cognitive profile of the subject and of an orientation of the at least one prismatic lens suitable for inducing a predetermined vision perturbation of the visual target stimulus to the subject associated with a predetermined therapeutic effect on the subject (Chen, ¶[0045]: "To detect the presence and measure the severity of spatial neglect, the Kessler Foundation Neglect Assessment Process (KF-NAP) was used in combination with the Catherine Bergego Scale (CBS) to ensure optimal sensitivity to the presence of neglect in functional activities." Chen describes performance assessments that indicate a cognitive profile, and these are used in treatment decisions. ¶[0036] states: "In one embodiment, the selectable prismatic effect is to select the magnitude and the direction of a visual-field shift.", showing that the system allows clinicians to configure the prism angle and direction. While Chen does not explicitly state that the configuration is stored, Figure 6 illustrates that multiple types of performance data are tracked over time, and ¶[0042] confirms that both pre- and post-treatment values are measured, implying data persistence. A person of ordinary skill in the art would understand that, in order to compare post-exposure deviation to baseline and determine persistence of adaptation, the system or clinician would retain the relevant assessment values and the prism settings applied. Thus, Chen implies that data indicative of the cognitive profile and the associated prism orientation are stored to guide ongoing therapy and track therapeutic effect).

Chen does not teach that the electronic image generation unit is programmed to generate a predetermined sequence of visual target stimuli for focusing the subject's gaze in a variable position on an area of the screen.
Rather, Chen discloses a video-based therapeutic system that presents processed real-time images to the patient via wearable display screens, including content during therapy tasks such as reaching to targets (Chen, ¶[0034]; ¶[0004]). However, Chen does not disclose generating a predetermined sequence of visual target stimuli in variable screen positions for the purpose of focusing the subject's gaze. Ishikawa, who uses prismatic glasses in a brain function evaluation system, explicitly discloses a system that displays a predetermined number of visual marks (or targets) on a screen in randomly determined positions and presents them repeatedly in a structured sequence to the user for the purpose of evaluating visuomotor performance, such as deviation and adaptation (Ishikawa, ¶[0034], [0031]). One of ordinary skill in the art would have found it obvious to incorporate Ishikawa's method of presenting spatially and temporally varied visual targets into the Chen system, enabling controlled presentation of stimuli on the display screens already described in Chen. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chen in view of Ishikawa to generate a predetermined sequence of visual target stimuli for focusing the subject's gaze in a variable position on an area of the screen. The benefit of this combination would be to enable Chen's system to perform structured therapeutic protocols with dynamic and spatially distributed target stimuli, improving diagnostic accuracy and rehabilitation outcomes in patients with spatial neglect.

Chen also does not fully disclose that said optical instrument further comprises a housing seat for said at least one prismatic lens, configured to rotatably support said at least one prismatic lens. Rather, Chen discloses eyewear having a frame and rotatable prisms, but it does not explicitly mention a structural housing seat within the frame to support lens rotation (Chen, ¶[0031]). Grassi, who investigates prismatic glasses, expressly teaches a frame structure that includes housing seats configured to rotatably support prismatic lenses (Grassi, FIG. 4, ¶[0007]-[0008], ¶[0015]-[0016]). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the eyewear in Chen to incorporate the housing seat structure taught in Grassi to rotatably support the prismatic lenses. The benefit of this modification would be to provide a defined, stable mechanism for integrating rotatable prismatic lenses within a wearable frame, improving usability and repeatability of therapeutic lens adjustments.

Further, Chen does not explicitly disclose that said processing means is configured to determine which orientation of the at least one prismatic lens is necessary to induce the predetermined therapeutic effect according to the cognitive profile of the subject. Rather, Chen discloses a cognitive rehabilitation system that presents visual stimuli and tracks the user's interaction with those stimuli to assess visuomotor alignment (Chen, ¶[0042], ¶[0059]). A wearable prismatic optical instrument is used to shift the user's perceived field of view, with selectable prism direction and magnitude (Chen, ¶[0036]). Chen discloses that prism orientation is chosen based on the nature of the spatial neglect and related cognitive impairments, and performance data (e.g., pointing error, CBS scores) are tracked across time (Chen, ¶[0045], Figure 6). Prism adaptation treatment (PAT) is employed, and weekly progress assessments help clinicians guide therapy (Chen, ¶[0042]-[0048]).
While Chen does not disclose an autonomous processor selecting prism orientation, the system functionally determines therapeutic orientation by associating assessment data with a chosen lens configuration (Chen, ¶[0036]). One of ordinary skill in the art would be able to use the assessment data, coupled with the clinician's determinations, as guidance to create a threshold-based processing means that determines the prismatic orientation needed to produce a predetermined therapeutic effect. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chen to specifically disclose a processing means configured to determine or recommend the orientation of the at least one prismatic lens based on the stored cognitive performance data of the subject. The benefit of this modification would be to enable automated or clinician-assisted therapeutic tailoring, improving the precision and consistency of prism orientation selection for treating spatial neglect and related impairments.

Regarding claim 20, Chen does not explicitly teach that the processing means are configured to compare the amount of deviation of the aiming position with respect to the displayed position of the visual target stimulus with the predetermined vision perturbation associated with a predetermined therapeutic effect on the subject as a function of the cognitive profile stored by the storage means. Rather, Chen discloses that prism therapy begins with measurement of pointing error and uses this deviation data to assess whether the desired prism effect (e.g., a therapeutic leftward shift) has occurred (Chen, ¶[0042], ¶[0059]). The magnitude of this shift is evaluated against the initial cognitive condition of the patient (Chen, ¶[0045]), which determines whether a therapeutic effect is taking place.

Although Chen does not explicitly describe a processor performing the comparison step, or the step taking place during therapy, the system functionally operates in that manner: it evaluates pointing deviation following prism exposure and determines whether a therapeutic shift has occurred based on the patient's cognitive condition (Chen, ¶[0042], ¶[0045], ¶[0059]). Ishikawa reinforces this structure by teaching a divergence quantity calculation unit and an evaluation unit that automate deviation computation and evaluate the user's deviation against a normative reference, such as divergence statistics from healthy individuals, to assess performance relative to an expected behavioral standard (Ishikawa, ¶[0037]–[0038]). Ishikawa's architecture of separating divergence calculation (unit 16) from evaluation (unit 17) reflects a processing pipeline that would be readily adaptable to Chen's system, where pointing deviation is already tracked and interpreted in light of cognitive condition. One of ordinary skill in the art would readily understand that Chen's system inherently performs a comparison of measured deviation against a predetermined therapeutic expectation shaped by the subject's diagnostic cognitive profile (Chen, ¶[0045]). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chen in view of Ishikawa to include a processing means configured to compare the measured deviation with the expected vision perturbation associated with a predetermined therapeutic effect for the subject's cognitive profile. The benefit of this combination would be to enable adaptive and automated therapy decisions, reducing subjectivity in assessment and improving consistency in treatment personalization based on performance metrics and cognitive condition.
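Ishikawa's separation of a divergence quantity calculation unit (16) from an evaluation unit (17) amounts to a two-stage pipeline: compute each trial's deviation, then compare the aggregate against a normative reference. A minimal sketch of that pipeline, assuming Euclidean pixel distance as the divergence measure and a mean-plus-k-standard-deviations normative threshold (both are our assumptions; neither Chen nor Ishikawa discloses this code or these values):

```python
import math

def divergence(aim, target):
    """Stage 1 (cf. Ishikawa's unit 16, as characterized above): divergence
    quantity as Euclidean distance between the subject's aiming position and
    the displayed target position (both assumed to be in pixels)."""
    return math.dist(aim, target)

def evaluate(divergences, healthy_mean, healthy_std, k=2.0):
    """Stage 2 (cf. unit 17): compare the subject's mean divergence against
    a normative reference (e.g., healthy-population statistics); pass when
    it does not exceed mean + k standard deviations. The threshold form is
    an illustrative assumption."""
    subject_mean = sum(divergences) / len(divergences)
    return subject_mean <= healthy_mean + k * healthy_std

# Example: three trials scored against a hypothetical healthy norm of 12 ± 4 px.
trials = [divergence(a, t) for a, t in [((100, 98), (100, 100)),
                                        ((210, 190), (200, 200)),
                                        ((305, 300), (300, 300))]]
print(evaluate(trials, healthy_mean=12.0, healthy_std=4.0))
```

The same two-stage split maps onto Chen's workflow: the pre/post pointing-error comparison in ¶[0042] and ¶[0059] is the divergence stage, and the therapeutic-effect judgment against the cognitive profile is the evaluation stage.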
Regarding claim 22, Chen teaches that the cognitive profile of the subject includes information indicative of a state of health and/or disturbances felt by the subject (Chen, ¶[0045]: "To detect the presence and measure the severity of spatial neglect, the Kessler Foundation Neglect Assessment Process (KF-NAP) was used in combination with the Catherine Bergego Scale (CBS)...". Chen discloses cognitive assessment tools that evaluate both the presence and severity of spatial neglect, a condition arising from neurological health disturbances. ¶[0048]: "For patients with left-sided spatial neglect (after right-brain damage), prism lenses with the thicker part on the left could be used." This confirms that Chen uses diagnostic information about neurological deficits, i.e., disturbances related to brain injury, to guide therapy).

Regarding claim 23, Chen does not fully teach that said electronic image generation unit is programmed to generate said predetermined sequence of visual target stimuli alternatively in a central position on the area of the screen, in a position belonging to a first lateral portion of the screen, and in a position belonging to a second lateral portion of the screen opposite to the first lateral portion with respect to an axis of vertical symmetry of the screen. While movement of the target is implied, Chen does not disclose changing the position of visual targets between the center and opposite lateral screen portions. Ishikawa clearly teaches a system that varies the position of a displayed mark across central and peripheral screen areas (Ishikawa, ¶[0034], [0056]). One of ordinary skill in the art would recognize that applying Ishikawa's dynamic target positioning within Chen's therapeutic interface would enhance the system's ability to assess spatial awareness and guide prism adaptation.

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chen in view of Ishikawa to include a display control mechanism programmed to generate visual stimuli at alternating central and lateral screen positions. Ishikawa teaches selecting target positions at random across the screen and also shows that new stimuli are displayed in positions different from previous ones (Ishikawa, ¶[0034], [0056]). While Ishikawa does not explicitly disclose alternating between a central position and two opposing lateral positions, it teaches random and repeated repositioning of visual marks across the display. From this teaching, one of ordinary skill in the art would recognize that a structured left-center-right sequence is an obvious variation to support spatial training. A sequence alternating between a central and two lateral screen positions enables balanced engagement of the visual fields across the sagittal axis, which is critical in therapies for conditions like spatial neglect that impair lateralized attention or perception. Such a structured approach to target presentation would be seen as a routine and beneficial variation on the random or arbitrary positioning taught in Ishikawa, especially in the context of spatial rehabilitation therapies, where structured lateral targeting is known to enhance training specificity and engagement for correcting spatial neglect. The benefit of this modification would be to strengthen diagnostic sensitivity for detecting lateralized visuospatial deficits and to improve training accuracy for spatial correction during therapy.

Regarding claim 24, Chen does not teach that the aiming sensor means include a mouse or a tactile surface of said screen. Chen does disclose a video-based system that uses cameras to detect pointing or reaching toward a target stimulus (Chen, ¶[0034], ¶[0042]).
However, Chen does not disclose the use of a mouse or tactile surface such as a touchscreen for this purpose. Ishikawa, by contrast, explicitly discloses the use of a touchscreen that records the subject’s input position in response to the visual stimulus (Ishikawa, ¶[0046]). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chen in view of Ishikawa to include an aiming sensor means such as a touchscreen configured to detect user input relative to a visual target on the screen. The benefit of this modification would be to enable direct, reliable, and intuitive user input by allowing subjects to respond to stimuli via physical touch, simplifying system interaction and improving precision of aiming detection. Regarding claim 25, Chen does not teach that the electronic image generation unit is controlled by said processing means to display the visual target stimulus on the screen for a variable time or as long as said processing means determine a substantial coincidence of said aiming position with respect to the displayed position of the visual target stimulus. Rather, Chen tracks and evaluates the user’s pointing performance relative to stimulus location (Chen, ¶[0058]), but does not disclose controlling stimulus display time based on that alignment. Ishikawa presents a system with discrete trials where visual stimuli are refreshed in new screen positions, reinforcing the concept of dynamic control of stimulus presentation, even though it lacks conditional display timing based on user input (Ishikawa, ¶[0056]). While neither explicitly teaches controlling display duration based on aiming coincidence, Figure 6 in Chen supports that the system already evaluates alignment between the target location and the user’s input by breaking down spatial performance into 'where-only', 'aiming-only', and combined 'where+aiming' conditions. 
This indicates the system tracks both perceptual localization of the target and motor execution of the aiming action. By evaluating these components separately and in combination, the system forms a more complete assessment of spatial processing deficits, demonstrating that it processes both the location of the visual stimulus and the subject's response to it. This alignment corresponds to the claimed “substantial coincidence,” as it captures how accurately the user’s aiming behavior converges on the target’s actual location, a measurement shown to be tracked and plotted in Chen’s system (Chen, FIG. 6). Given that the system already quantifies spatial misalignment over time, one of ordinary skill in the art would recognize that using this real-time alignment information to control the duration of stimulus display is a routine enhancement: it would simply involve comparing the deviation to a threshold value and terminating the stimulus once sufficient alignment is achieved, such that the variable display time depends on the substantial coincidence of the aiming itself. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chen in view of the general display control principles of Ishikawa to include logic for adjusting stimulus display time based on the processing means determining a substantial coincidence between the aiming position and the displayed target position. The benefit of this modification would be to align stimulus presentation with user responsiveness, enhancing training adaptability and optimizing patient engagement during therapeutic tasks.
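The compare-and-terminate logic described above for claim 25 can be reduced to a short sketch. This is purely an illustration of the reasoning, not code from Chen or Ishikawa; the function names, the pixel-distance measure of coincidence, and the threshold values are all assumptions.

```python
import math
import time

# Hypothetical sketch of the display-timing logic discussed for claim 25:
# the stimulus stays on screen until the aiming position substantially
# coincides with the target position (deviation below a threshold) or a
# maximum display time elapses. All names and values are illustrative.

COINCIDENCE_THRESHOLD_PX = 15.0   # assumed "substantial coincidence" radius
MAX_DISPLAY_TIME_S = 5.0          # fallback cap on the variable display time

def deviation(aim, target):
    """Euclidean distance between aiming position and target, in pixels."""
    return math.hypot(aim[0] - target[0], aim[1] - target[1])

def present_stimulus(target, read_aim, now=time.monotonic):
    """Show the target until aiming coincides with it or time runs out.

    read_aim() returns the current (x, y) aiming position reported by the
    aiming sensor means (e.g. a touchscreen or mouse).
    Returns the elapsed display time in seconds.
    """
    start = now()
    while now() - start < MAX_DISPLAY_TIME_S:
        if deviation(read_aim(), target) <= COINCIDENCE_THRESHOLD_PX:
            break  # substantial coincidence reached: end the stimulus
        time.sleep(0.001)  # poll the aiming sensor again shortly
    return now() - start
```

The timeout is retained so the stimulus still terminates when coincidence is never achieved, matching the claim's alternative of display "for a variable time."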
Regarding claim 28, Chen teaches that the optical instrument is a pair of glasses comprising a frame (Chen, ¶[0029], “eyewear include, for example, glasses and goggles”; ¶[0010]: “eyewear…comprising: a frame”), but Chen does not disclose that they include said housing seat of said at least one prismatic lens configured to rotatably support said at least one prismatic lens. Rather, Chen discloses eyewear having a frame and rotatable prisms, but it does not explicitly mention a structural housing seat within the frame to support lens rotation (Chen, ¶[0031]). Grassi, who investigates prismatic glasses, expressly teaches a frame structure that includes housing seats configured to rotatably support prismatic lenses (Grassi, FIG. 4, ¶[0007]-[0008], ¶[0015]-[0016]). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the eyewear in Chen to incorporate the housing seat structure taught in Grassi to rotatably support the prismatic lenses. The benefit of this modification would be to provide a defined, stable mechanism for integrating rotatable prismatic lenses within a wearable frame, improving usability and repeatability of therapeutic lens adjustments. Regarding claim 29, Chen teaches that the optical instrument is a virtual reality viewer with at least one integrated prismatic lens (Chen, ¶[0036]: "Specifically, the eyewear 60 may use selectively rotatable or substitutable prisms proximate the camera lenses to achieve the desired prism angle and direction...", disclosing eyewear equipped with prisms to alter the visual field; ¶[0031]: "The system 50 comprises eyewear 60 coupled to a processor 90, such as a video processor... 
The depicted eyewear 60 comprises a frame 65 such as, for example, an eye glass or goggle frame having mounted thereon left and right video display screens 70 and 75" and [0011]: “the video processor configured to provide… video images including a selectable prismatic effect and blocking of substantial portions of arm movement of the patient”, Chen teaches a glasses or goggle-style frame with dual video display screens, a processor for video input, and prisms integrated into the optical path, features that align with a virtual reality (VR) viewer equipped with visual processing and prismatic displacement. The described system mimics a VR setup in which a subject views a processed, artificial visual environment through head-mounted displays, with prism-induced vision perturbations). Claims 21 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20180143456 A1), hereinafter referred to as Chen, and further in view of Ishikawa et al. (US 20150320350 A1), hereinafter referred to as Ishikawa, and further in view of Grassi (US 20160349533 A1), hereinafter referred to as Grassi, and further in view of Laffont et al. (US 20200051320 A1), hereinafter referred to as Laffont. The combined Chen, Ishikawa, and Grassi teaches claims 19 and 20 as described above. Regarding claim 21, the combined Chen, Ishikawa, and Grassi does not fully teach that the processing means are configured to interact with the optical instrument to orient the at least one prismatic lens so as to induce said predetermined vision perturbation of the visual target stimulus to the subject associated with a predetermined therapeutic effect. Rather, Chen teaches that the prismatic lenses may be selectively rotated to achieve a desired prism angle and direction within a frame, which induces the visual shift needed for therapy (Chen, FIG. 5 and ¶[0036]). However, Chen does not disclose that the processing means are configured to perform this adjustment. 
Laffont, who investigates auto-adjusting glasses, discloses a system where a processing means controls an actuator to move optical lenses to achieve a desired configuration (Laffont, ¶[0013]: “The system... includes a processing means... instructing the optical system to reconfigure in response to the determination of the focus...”; ¶[0017]: “...a controller... moves at least two... lenses laterally over one another in a specific direction to generate either a positive spherical power change or a negative spherical power change...”; ¶[0199]: “...lens elements 1801 and 1802 are configured to be rotated in clockwise direction, as in movement 7; or counter-clockwise direction, as in movement 8, to change the cylinder axis. The lens element 1 or 2 can be rotated separately or in combination”). This confirms that a processing unit can operate a motorized actuator to orient lenses. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined Chen, Ishikawa, and Grassi in view of Laffont to include an actuator mechanism controlled by the processing means to orient the prismatic lens. The benefit of this modification would be to provide programmable and repeatable adjustment of the prism’s orientation, improving the system's ability to deliver tailored vision perturbations aligned with the subject’s evolving therapeutic needs. Regarding claim 30, Chen does not expressly teach that said predetermined sequence of visual target stimuli comprises at least one of images of circular shape, alphabet letters, numbers, words, drawings. Rather, Chen discloses a cognitive rehabilitation system in which the patient performs pointing and visual tasks using processed video imagery shown on display screens embedded in wearable eyewear (Chen, ¶[0034]). However, Chen does not disclose a predetermined sequence of images, but does disclose the target may be a line or circle (Chen, ¶[0034]). 
Ishikawa, who also addresses visuomotor tasks for brain function evaluation using prism glasses, explicitly discloses that the system displays a mark as the visual target for the user to indicate, including a circular shape as shown in FIG. 3 (Ishikawa, FIG. 3, ¶[0031], [0034]). It also explicitly discloses a system that displays a predetermined number of visual marks (or targets) on a screen in randomly determined positions, and presents them repeatedly in a structured sequence to the user for the purpose of evaluating visuomotor performance, such as deviation and adaptation (Ishikawa, ¶[0034], [0031]). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chen in view of Ishikawa to include displaying a sequence of visual target stimuli comprising images of a circular shape or any design choice variation such as alphabet letters, numbers, words, or drawings. The benefit of this combination is that using known types of visual symbols such as shapes and letters enables standardized cognitive assessment and rehabilitation tasks with defined performance criteria. Incorporating such structured stimuli into Chen’s display would enhance the system’s capability to guide and evaluate specific cognitive responses, making therapy more targeted and effective. Claims 26-27 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20180143456 A1), hereinafter referred to as Chen, and further in view of Ishikawa et al. (US 20150320350 A1), hereinafter referred to as Ishikawa, and further in view of Grassi (US 20160349533 A1), hereinafter referred to as Grassi, and further in view of Lewis (US 10962789 B1), hereinafter referred to as Lewis. The combined Chen, Ishikawa, and Grassi teaches claim 19 as described above. 
Regarding claim 26, the combined Chen, Ishikawa, and Grassi does not teach that at least one prismatic lens is configured to filter red, orange, or blue wavelengths of said beam of light rays coming from the visual target stimulus. The claim and specification do not elaborate on why the prismatic lens is configured to filter light or what cognitive mechanism is intended. The Examiner interprets the purpose as tied to modulation of mental state, perceptual tuning, or cognitive load in relation to neuroplasticity and therapeutic impact. Chen teaches the use of a prismatic lens (Chen, ¶[0010]), but does not specify any filtering effect. Lewis supports this interpretation by explicitly linking selective frequency filtering in eyewear to user performance, perceptual response, and mental state, particularly through dynamic systems that react to real-time sensor feedback (Lewis, Col. 30, Lines 19–34). Lewis further teaches reducing or filtering certain frequencies of light (e.g., blue light) to reduce overstimulation, glare, and perceptual strain, which may contribute to improved cognitive performance and decreased sensory interference (Lewis, Col. 4, Lines 44–61; Col. 7, Lines 20–55). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined Chen, Ishikawa, and Grassi in view of Lewis to include wavelength-selective filtering for cognitive modulation. The benefit of this modification would be to allow the prism therapy system of Chen to not only shift the user’s visual field but also improve mental state, reduce overstimulation, and enhance perceptual clarity through targeted spectral filtering, improving the system’s therapeutic adaptability and efficacy. 
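The wavelength-selective filtering discussed for claim 26 can be modeled as a band-stop predicate over the visible spectrum. In the sketch below, the band edges (blue 450–480 nm, red 620–750 nm) follow the conventional visible-spectrum ranges recited in claim 27; the function and constant names are assumptions for illustration and do not come from Chen or Lewis.

```python
# Illustrative model of wavelength-selective filtering in a prismatic
# lens: a band-stop predicate that blocks selected visible bands.
# Band edges are the conventional ranges recited in claim 27; all
# names here are assumed for the sketch.

FILTER_BANDS_NM = {
    "blue": (450.0, 480.0),   # short-wavelength band
    "red": (620.0, 750.0),    # long-wavelength band
}

def is_blocked(wavelength_nm, bands=("blue", "red")):
    """Return True if the configured lens filters (blocks) this wavelength."""
    return any(
        lo <= wavelength_nm <= hi
        for name, (lo, hi) in FILTER_BANDS_NM.items()
        if name in bands
    )
```

For example, a lens configured with `bands=("blue",)` would block 460 nm light while passing 700 nm, matching the claim's alternative of filtering only one of the recited ranges.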
Regarding claim 27, Chen does not teach that at least one prismatic lens is configured to filter only light radiation of wavelength in a range comprised between 620 nm and 750 nm or in a range comprised between 450 nm and 480 nm. The claim and specification do not elaborate on why the prismatic lens is configured to filter light or what cognitive mechanism is intended. The Examiner interprets the purpose as tied to modulation of mental state, perceptual tuning, or cognitive load in relation to neuroplasticity and therapeutic impact. Chen teaches the use of a prismatic lens (Chen, ¶[0010]), but does not specify any filtering effect. Lewis supports this interpretation by explicitly linking selective frequency filtering in eyewear to user performance, perceptual response, and mental state, particularly through dynamic systems that react to real-time sensor feedback (Lewis, Col. 30, Lines 19–34). Lewis further teaches reducing or filtering certain frequencies of light (Lewis, Col. 12, Lines 54-67: "450-490 nm blue") to reduce overstimulation, glare, and perceptual strain, which may contribute to improved cognitive performance and decreased sensory interference (Lewis, Col. 4, Lines 44–61; Col. 7, Lines 20–55). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the lens system of the combined Chen, Ishikawa, and Grassi in view of Lewis to include filtering of light in the 450–480 nm range, as these are well-known therapeutic targets related to photophobia, perceptual tuning, and cognitive state. The benefit of this modification would be to target known spectral ranges for filtering that reduce visual overstimulation and improve perceptual clarity, thereby enhancing the therapeutic effectiveness of the prism-based cognitive rehabilitation system described in Chen. Claim 31 is rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. 
(US 20180143456 A1), hereinafter referred to as Chen, and further in view of Ishikawa et al. (US 20150320350 A1), hereinafter referred to as Ishikawa, and further in view of Grassi (US 20160349533 A1), hereinafter referred to as Grassi, and further in view of Vyshedskiy (US 20190076670 A1), hereinafter referred to as Vyshedskiy. The combined Chen, Ishikawa, and Grassi teaches claim 19 as described above. Regarding claim 31, the combined Chen, Ishikawa, and Grassi does not expressly teach that the electronic image generation unit is configured to transmit to the optical instrument a signal representative of the visual target stimulus, the signal having a frequency of 10 Hz or 40 Hz. Chen does disclose an image generation unit that processes visual stimuli and transmits them to eyewear screens for patient rehabilitation activities (Chen, ¶[0031]). However, Chen does not disclose that the visual target stimulus is transmitted at a frequency of 10 Hz or 40 Hz. Vyshedskiy teaches the use of cognitive stimuli presented alongside a light source that flickers at 40 Hz for therapeutic benefit in treating cognitive decline and improving the brain's function (Vyshedskiy, Abstract, ¶[0003]). Additionally, it teaches presenting images on a screen that update or flicker at 40 Hz for the same purpose (Vyshedskiy, ¶[0011]). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined Chen, Ishikawa, and Grassi in view of Vyshedskiy to configure the image generation unit to transmit a visual target stimulus signal at 40 Hz. The benefit of this combination is that incorporating a defined visual signal frequency, such as 40 Hz, is known to support brain stimulation and cognitive engagement (Vyshedskiy, Abstract, ¶[0003]). 
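As an aside, the 10 Hz and 40 Hz figures correspond to stimulus cycle lengths of 100 ms and 25 ms respectively. The toy functions below, with illustrative names only, show how a fixed-frequency on/off stimulus signal of this kind can be derived from elapsed time; they are a sketch of the concept, not an implementation from Chen or Vyshedskiy.

```python
# Minimal sketch of driving a visual stimulus at a fixed frequency, as
# discussed for claim 31. A 40 Hz signal toggles the stimulus every half
# cycle (12.5 ms on, 12.5 ms off). Function names are assumed.

def period_ms(freq_hz):
    """Cycle length in milliseconds: 25 ms at 40 Hz, 100 ms at 10 Hz."""
    return 1000.0 / freq_hz

def stimulus_state(t_seconds, freq_hz=40.0):
    """Return True when the stimulus is in its 'on' half-cycle at time t."""
    phase = (t_seconds * freq_hz) % 1.0  # position within the current cycle
    return phase < 0.5
```

A renderer would call `stimulus_state` each frame with the elapsed time and show or hide the target accordingly, so the displayed signal flickers at the configured frequency.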
Using such a frequency in the Chen system does not involve any unexpected technical effect and is within the routine design choices of a person skilled in the art seeking to implement cognitive enhancement therapies through visual stimuli. Claims 32-34 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20180143456 A1), hereinafter referred to as Chen, and further in view of Ishikawa et al. (US 20150320350 A1), hereinafter referred to as Ishikawa, and further in view of Grassi (US 20160349533 A1), hereinafter referred to as Grassi, and further in view of Tissieres (Tissieres, Isabel et al. “For Better or Worse: The Effect of Prismatic Adaptation on Auditory Neglect.” Journal of neural transplantation & plasticity 2017 (2017): 1–11. Web.), hereinafter referred to as Tissieres. The combined Chen, Ishikawa, and Grassi teaches claim 19 as described above. Regarding claim 32, the combined Chen, Ishikawa, and Grassi does not fully teach that the system further comprises an electronic unit for generating an electromagnetic signal of an acoustic stimulus, configured to transmit said electromagnetic signal of an acoustic stimulus to receiving means coupled to said optical instrument, said receiving means being configured to convert said electromagnetic signal of an acoustic stimulus into at least one acoustic stimulus at least one frequency audible by the subject, for conditioning the subject in a way complementary to said visual target stimulus. Rather, Chen discloses a system wherein patients view visual target stimuli through wearable eyewear that processes and delivers therapeutic content (Chen, ¶[0031]). Chen also provides for auditory cues to support rehabilitation tasks (Chen, ¶[0053]–[0054]). However, it does not include an acoustic stimulus generator/receiver. 
Tissieres discloses the use of prismatic adaptation to treat symptoms of auditory neglect, including auditory extinction and localization deficits, through coordinated visual and auditory stimulation (Tissieres, Abstract). Auditory word pairs are delivered to each ear (Tissieres, Section 2.2, p. 3) and bumblebee sounds ranging from 20 to 10,000 Hz are presented through headphones at defined lateral positions (Tissieres, Section 2.3.3, p. 4). Participants respond by indicating sound location using a semicircle affixed to the headphone (Tissieres, Section 2.3.3, p. 4). The concept of producing auditory stimuli in the context of a prism adaptation setup is further described in Tissieres, which states that visual and auditory stimuli involve the same cortical regions and likely a shared attentional network showing complementary conditioning (Tissieres, Introduction, p. 1). Additionally, three of the six visuomotor activities in the prism adaptation procedure resulted in sound production (Tissieres, Section 2.2, p. 3). Although Tissieres does not state that the headphones are structurally coupled to the optical instrument, both devices are simultaneously worn on the head during the prismatic adaptation tasks. Therefore, the headphones and prismatic lenses are functionally coupled through the user’s head, enabling coordinated multimodal stimulation (Tissieres, Section 2.3.3, p. 4). One of ordinary skill in the art would have recognized that such co-located, head-mounted systems could be mechanically or electronically coupled if desired, as a straightforward and expected engineering option in wearable device design. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention …

Prosecution Timeline

Sep 20, 2021
Application Filed
Sep 20, 2021
Response after Non-Final Action
Mar 27, 2025
Non-Final Rejection — §103, §112
Aug 01, 2025
Response Filed
Sep 11, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12521065
SOCK WITH PRESSURE SENSOR GRID FOR USE WITH TENSIONER TOOL
2y 5m to grant Granted Jan 13, 2026
Patent 12490961
MEDICAL DEVICES AND RELATED METHODS
2y 5m to grant Granted Dec 09, 2025
Patent 12408863
SPINAL ALIGNMENT-ESTIMATING APPARATUS, SYSTEM FOR ESTIMATING SPINAL ALIGNMENT, METHOD FOR ESTIMATING SPINAL ALIGNMENT, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN PROGRAM FOR ESTIMATING SPINAL ALIGNMENT
2y 5m to grant Granted Sep 09, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
25%
Grant Probability
99%
With Interview (+88.2%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
