DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 3-4 and 8 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 3 recites “wherein the first end application comprises an active state and an inactive state, and wherein transmitting the first input command comprises: determining whether the first end application is in the active state, and transmitting the first input command to the first end application if the first end application is in the active state”. This does not find support in the instant disclosure or in the parent application disclosure, as follows:
There is no explicit support in the instant disclosure or in the parent application disclosure for conditioning the transmission of the input command on whether an end application is active or inactive, or even determining whether the end application is active or inactive.
Instant par. 32 explains sending the input command both to active end applications and to inactive end applications; in other words, the transmission of the input command does not appear to be conditioned on the active/inactive state of the end application.
Par. 32 also explains the time of execution of the input command at the end application: either at the time of receipt, when the end application is active, or from a queue, when the end application is inactive. In other words, the explanation of active/inactive end applications concerns the timing of execution of the input command and does not appear to require a determination of the state of the end application.
Claim 4 recites “foregoing transmitting the first input command to the first end application if the first end application is in the inactive state”. There is no support for foregoing transmission in the instant disclosure or in the parent application disclosure.
Claim 8 recites “wherein determining whether to transmit the first input command to the first end application and whether to transmit the second input command to the second end application comprises determining whether the first end application and the second end application are in an active state or a passive state”. This does not find support in the instant disclosure or in the parent application disclosure, as follows:
There is no passive state in the disclosure.
Furthermore, as explained above for claim 3, there is no support for conditioning the transmission of the input command on whether an end application is active or inactive, or even determining whether the end application is active or inactive.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 11-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sereshkeh et al. in US 2019/0107888 (hereinafter Sereshkeh).
Regarding claim 11, Sereshkeh discloses a method of calibrating neural signals (Sereshkeh’s par. 46: imagined speech) as electronic switches (Sereshkeh’s par. 52: control of applications) to permit an individual to control one or more electronic devices using the neural signals (Sereshkeh’s par. 52), the method comprising:
measuring first neural-related signals (Sereshkeh’s par. 48-50: measured brain signals on a first repetition of the word “no”) when the individual generates a task-irrelevant thought (Sereshkeh’s par. 50: word “no” which is irrelevant to turning on/off a switch for an assistive device or to call a caregiver, or to control a music player app of par. 148) to obtain a first sensed neural signal (Sereshkeh’s par. 48-50: EEG or fNIRS signal when mentally saying a first repetition of the word “no”);
measuring second neural-related signals (Sereshkeh’s par. 48-50: measured brain signals on a second repetition of the word “no”) when the individual generates the task-irrelevant thought (Sereshkeh’s par. 50: word “no”) to obtain a second sensed neural signal (Sereshkeh’s par. 48-50: EEG or fNIRS signal when mentally saying a second repetition of the word “no”);
transmitting the first sensed neural signal and the second sensed neural signal to a processing unit (Sereshkeh’s Figs. 1-2 and par. 48: classification includes collecting measured signals upon repetition of the word “no”, and par. 72, 77: the data collection unit sends the measurements to the processing device for classification);
analyzing the first sensed neural signal and the second sensed neural signal at the processing unit (Sereshkeh’s Figs. 1-2 and par. 48: classification includes using the repetition of the word “no”, and par. 72: classification using processing device) to extract one or more features of the first sensed neural signal and the second sensed neural signal (Sereshkeh’s par. 48: repetitions of the word “no” are classified by feature extraction per par. 79-82);
using the one or more features of the first sensed neural signal and the second sensed neural signal to train a machine learning model (Sereshkeh’s par. 48, 81-83: features of the signals from repetitions of the word “no” are used for machine learning classification), wherein the machine learning model is trained to identify the task-irrelevant thought based on the one or more features (Sereshkeh’s par. 82-83, 115: machine learning model trained to identify the word “no”);
compiling the one or more features of the first sensed neural signal and the second sensed neural signal to a database stored in electronic format (Sereshkeh’s Fig. 2 and par. 67, 76-77, 92: database, storage or memory storing information associated with classification, which includes the signals from repetitions of the word “no”) which allows the individual to control the one or more electronic devices (Sereshkeh’s par. 148: user controls assistive device, make a phone call or music player app) by producing the task-irrelevant thought (Sereshkeh’s par. 50: word “no” which is irrelevant to turning on/off a switch for an assistive device or to call a caregiver, or to control a music player app of par. 148) to cause electrical transmission of an input command to the one or more electronic devices (Sereshkeh’s par. 52, 148: turn-on assistive device, call caregiver, or control music player app, communication with electronic devices requires some type of electrical transmission).
Regarding claim 12, Sereshkeh discloses wherein the one or more features comprise voltage fluctuations, fluctuations in power in one or more frequency bands, a spectra power, a time-frequency domain, or a time-domain signal (Sereshkeh’s par. 204-206: DWT coefficients or FNIRS data).
Claims 1-2, 5-7 and 9-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Segal et al. in US 2015/0091791 (hereinafter Segal).
Regarding claim 1, Segal discloses a method (Segal’s par. 2) of controlling one or more electronic devices (Segal’s par. 56-58) using a brain-computer implant implanted in an individual (Segal’s par. 50, 63, 76), the method comprising:
measuring neural-related signals (Segal’s Fig. 1 and par. 77: brain wave) when the individual generates a task-irrelevant thought (Segal’s par. 56: directional intention which does not necessarily correlate with the movement, e.g. thinking “up” is task-irrelevant to turning a device on) to obtain a sensed neural signal (Segal’s Figs. 1, 7 and par. 56: directional intention), wherein the sensed neural signal (Segal’s par. 56: e.g. directional intention “up”) is associated with a first input command (Segal’s par. 56, 58: e.g. turning a device on, such as a television) of a first end application (Segal’s par. 58: television) and is further associated with a second input command (Segal’s par. 56, 58: e.g. turning a device on, such as a media player) of a second end application (Segal’s par. 58: media player);
transmitting the sensed neural signal to a processing unit (Segal’s Fig. 1 and par. 77);
determining, at the processing unit, whether the sensed neural signal (Segal’s par. 56: e.g. directional intention “up”) is associated with one or more of the first input command and the second input command (Segal’s par. 64-66: directional pattern is compared against collection to determine action assigned to the line pattern);
receiving a selection from the individual (Segal’s par. 61: user’s intended menu selection) to transmit one or more of the first input command and the second input command (Segal’s par. 58, 60, 62: thought-to-action to launch one or more applications or thought-to-control component, including output to third party devices);
in accordance with a selection of the first input command (Segal’s par. 58-60: user selects a plaque for turning on a home theater system which includes turning on a television), transmitting the first input command to the first end application (Segal’s Fig. 1: command to device 19 or 17) to control a first electronic device associated with the first end application (Segal’s par. 58: operating a television to be turned on by the single line pattern “up” of par. 56); and
in accordance with a selection of the second input command (Segal’s par. 58-60: user selects a plaque for turning on a home theater system which includes turning on a media player), transmitting the second input command to a second end application (Segal’s Fig. 1: command to device 19 or 17) to control a second electronic device associated with the second end application (Segal’s par. 58: operating a media player to be turned on by the single line pattern “up” of par. 56).
Regarding claim 2, Segal discloses wherein receiving the selection from the individual comprises:
presenting a user interface to the individual (Segal’s Fig. 8 and par. 61: cognition user interface); and
receiving the selection from the individual on the user interface (Segal’s par. 61: user’s intended menu selection).
Regarding claim 5, Segal discloses a method (Segal’s par. 2) of controlling one or more electronic devices (Segal’s par. 56-58) using a brain-computer implant implanted in an individual (Segal’s par. 50, 63, 76), the method comprising:
measuring neural-related signals (Segal’s Fig. 1 and par. 77: brain wave) when the individual generates a task-irrelevant thought (Segal’s par. 56: directional intention which does not necessarily correlate with the movement, e.g. thinking “up” is task-irrelevant to turning a device on) to obtain a sensed neural signal (Segal’s Figs. 1, 7 and par. 56: directional intention), wherein the sensed neural signal (Segal’s par. 56: e.g. directional intention “up”) is associated with a first input command (Segal’s par. 56, 58: e.g. turning a device on, such as a television) of a first end application (Segal’s par. 58: television) and is further associated with a second input command (Segal’s par. 56, 58: e.g. turning a device on, such as a media player) of a second end application (Segal’s par. 58: media player);
transmitting the sensed neural signal to a processing unit (Segal’s Fig. 1 and par. 77);
determining, at the processing unit (Segal’s Fig. 1), whether to transmit the first input command to the first end application (Segal’s par. 66: whether there is a match between the line pattern and an action, to send the command [par. 62], such as the plaque for turning on a home theater system which includes turning on a television [par. 58]) and whether to transmit the second input command to the second end application (Segal’s par. 66: whether there is a match between the line pattern and an action, to send the command [par. 62], such as the plaque for turning on a home theater system which includes turning on a media player [par. 58]);
in accordance with a determination to transmit the first input command to the first end application (Segal’s Fig. 1, par. 58, 62, 66: when there is a match between the line pattern and the command to turn on a television, thus send command to perform execution), transmitting the first input command to the first end application (Segal’s Fig. 1 and par. 62: command to device 19 or 17) to control a first electronic device associated with the first end application (Segal’s par. 58: operating a television to be turned on by the single line pattern “up” of par. 56); and
in accordance with a determination to transmit the second input command to the second end application (Segal’s Fig. 1, par. 58, 62, 66: when there is a match between the line pattern and the command to turn on a media player, thus send command to perform execution), transmitting the second input command to a second end application (Segal’s Fig. 1 and par. 62: command to device 19 or 17) to control a second electronic device associated with the second end application (Segal’s par. 58: operating a media player to be turned on by the single line pattern “up” of par. 56).
Regarding claim 6, Segal discloses wherein determining whether to transmit the first input command to the first end application and whether to transmit the second input command to the second end application (Segal’s par. 66: whether there is a match between the line pattern and an action, to send the command [par. 62], such as the plaque for turning on a home theater system which includes turning on a television and turning on a media player [par. 58]) comprises receiving a selection from the individual (Segal’s par. 61: user’s intended menu selection) to transmit one or more of the first input command and the second input command (Segal’s par. 58, 60, 62: thought-to-action to launch one or more applications or thought-to-control component, including output to third party devices).
Regarding claim 7, Segal discloses wherein receiving the selection from the individual comprises receiving a selection from the individual on a user interface (Segal’s Fig. 8 and par. 61: user’s intended menu selection on cognition user interface).
Regarding claim 9, Segal discloses further comprising: compiling the task-irrelevant thought (Segal’s par. 56: directional intention, e.g. “up”), the sensed neural signal (Segal’s Figs. 1, 7 and par. 56: directional intention), the first input command (Segal’s par. 56, 58: e.g. turning a device on, such as a television), and the second input command (Segal’s par. 56, 58: e.g. turning a device on, such as a media player) to a database stored in electronic format (Segal’s par. 64-65: database of runes includes directional pattern [sensed neural signal] and directional intention [thought], and par. 77: database includes collection of line patterns [sensed neural signal] and specific actions [first or second input command]).
Regarding claim 10, Segal discloses wherein the database allows a third end user application (Segal’s par. 60: third party devices) to assign the task-irrelevant thought (Segal’s par. 60: thought in thought-to-control component) to a third input command (Segal’s par. 60: thought-controlled operation of third party device) of a third end application (Segal’s par. 60: third party device) to allow the individual to control the third end application (Segal’s par. 60: thought [of individual] controls the third party device in a thought-to-control component) by producing the task-irrelevant thought (Segal’s par. 60: thought in thought-to-control component) to cause electrical transmission (Segal’s par. 17) of the third input command (Segal’s par. 60: thought-controlled operation of third party device, e.g. turn on a device of par. 56) to the third end application (Segal’s par. 60: third party device).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-4 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Segal in view of Durland et al. in US 2003/0021405 (hereinafter Durland).
Regarding claim 3, Segal fails to disclose determining whether the first end application is in the active state, and transmitting the first input command to the first end application if the first end application is in the active state.
However, in the related field of transmitting information through a network, Durland discloses determining whether an application is in an active state (Durland’s par. 41: content relay system is informed whether a recipient device is in working order [active]) in order to transmit content to the application (Durland’s Fig. 3 step 276).
Therefore, it would have been obvious to one of ordinary skill in the art that Segal’s method includes a determination of whether the end application is in an active or inactive state prior to, and in order to, transmit the command (such as Durland’s par. 41 whether a recipient device is in working order for content transmission of Fig. 3 step 276), in order to obtain the benefit of skipping transmission of content to inactive recipient devices (Durland’s par. 41).
By making this combination, Segal in view of Durland disclose:
wherein the first end application (Segal’s par. 58: television equivalent to a recipient device of Durland’s par. 41) comprises an active state and an inactive state (Durland’s par. 41: working order or inactive), and wherein transmitting the first input command (Segal’s Fig. 1: command to device 19 or 17, equivalent to transmission of content in step 276 of Durland’s Fig. 3) comprises:
determining whether the first end application is in the active state (Durland’s par. 41: whether the recipient device is in working order, which upon combination is the television of Segal’s par. 58), and transmitting the first input command to the first end application if the first end application is in the active state (Durland’s Fig. 3 step 276 and par. 41: content is sent when recipient device is in working order).
Regarding claim 4, Segal in view of Durland disclose further comprising foregoing transmitting the first input command (Durland’s par. 41: skipping transmission of content, where the content is the command to turn-on a television of Segal’s par. 58) to the first end application (Segal’s par. 58: television equivalent to a recipient device of Durland’s par. 41) if the first end application is in the inactive state (Durland’s par. 41: inactive recipient device).
Regarding claim 8, Segal fails to disclose wherein determining whether to transmit the first input command to the first end application and whether to transmit the second input command to the second end application comprises determining whether the first end application and the second end application are in an active state or a passive state.
However, in the related field of transmitting information through a network, Durland discloses determining whether to transmit content to an application (Durland’s Fig. 3 step 276) based on whether the application is in an active or inactive state (Durland’s par. 41: content relay system is informed whether a recipient device is in working order [active]).
Therefore, it would have been obvious to one of ordinary skill in the art that Segal’s method includes a determination of whether the end application is in an active or inactive state prior to, and in order to, transmit the command (such as Durland’s par. 41 whether a recipient device is in working order for content transmission of Fig. 3 step 276), in order to obtain the benefit of skipping transmission of content to inactive recipient devices (Durland’s par. 41).
By making this combination, Segal in view of Durland disclose:
wherein determining whether to transmit the first input command to the first end application (Segal’s Fig. 1, par. 58, 62, 66: when there is a match between the line pattern and the command to turn on a television, thus send command to perform execution, this is equivalent to step 276 of Durland’s Fig. 3) and whether to transmit the second input command to the second end application (Segal’s Fig. 1, par. 58, 62, 66: when there is a match between the line pattern and the command to turn on a media player, thus send command to perform execution, this is equivalent to step 276 of Durland’s Fig. 3) comprises determining whether the first end application and the second end application are in an active state or a passive state (Durland’s par. 41: whether the recipient device is in working order or inactive, which upon combination is the television or the media player of Segal’s par. 58).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Liliana Cerullo whose telephone number is (571)270-5882. The examiner can normally be reached 8AM to 3PM MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad, can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LILIANA CERULLO/Primary Examiner, Art Unit 2621