DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 28-29, 31-38, and 40-43 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Torch (US 2007/0273611), cited in an IDS.
Regarding claim 28, Torch discloses a system for interacting with a patient, comprising: a communication module 130 configured to operate in at least three communication modes (Figs. 1, 2, 8; section 0048: the detection device may be coupled to a processing box that converts the detected eye and/or eyelid movement into a stream of data, an understandable message, and/or other information, which may be communicated, for example, using a video display, to a medical care provider) that comprise: a unidirectional mode, in which the communication module outputs communication to the patient that does not require his/her response (section 0072: the detection device may be used to detect impending drowsiness or "micro-sleeps," i.e., sleep intrusions into wakefulness lasting a few seconds, with the processing box triggering a warning to alert the user); a responsive mode, in which the communication module outputs communication to the patient that requires his/her response, the responsive mode comprising questions to or requests from the patient (section 0079: the detector device may produce a stimulus, e.g., activating a light or speaker, and monitor the user's eyelid movement in anticipation of receiving a response, e.g., a specific sequence of blinks, acknowledging the stimulus within a predetermined time); and an open communication mode, in which the communication module allows the patient to proactively initiate communication with the system; and processing circuitry that comprises an input module configured to receive input data indicative of the patient's sedation state (section 0083: a user wearing the detection device 30 may intentionally blink in a predetermined pattern, for example, in Morse code or other blinked code, to communicate an understandable message to people or equipment, e.g., to announce an emergency), the processing circuitry being configured to: determine the sedation state of the patient based on the input data (section 0075: the detection device and system may also be used in a medical diagnostic, therapeutic, research, or professional setting to monitor the wakefulness, sleep patterns, and/or sympathetic and parasympathetic effects of stressful conditions or alerting drugs (e.g., caffeine, nicotine, dextro-amphetamine, methylphenidate, modafinil), sedating drugs (e.g., benzodiazepines, Ambien), or circadian rhythm-altering effects of light and darkness or melatonin, which may affect blink rate, blink velocity, blink duration, or PERCLOS of a patient or vehicle operator); and trigger the communication module to operate in a selected communication mode in response to the determined sedation state (section 0117: these devices may provide feedback to the user, e.g., to alert and/or wake the user, when a predetermined condition is detected, e.g., a state of drowsiness or lack of consciousness; the feedback devices may be coupled to the processor, which may control their activation); wherein the determination of the sedation state of the patient based on the input data comprises classifying the sedation state of the patient into one of at least three sedation levels, each level triggering one or more level-specific communication modes (sections 0079, 0083, 0117, quoted above); wherein said classifying comprises scoring the sedation state of the patient, the score defining the sedation level, wherein three ranges of scores define the sedation levels, each range defining a different level; and wherein the processing circuitry is configured to: trigger the unidirectional mode upon determining a non-responsive sedation state of the patient (section 0072, quoted above); trigger the responsive mode and/or the unidirectional mode upon determining a low-level responsive sedation state of the patient (section 0079, quoted above); and trigger the open communication mode and/or the responsive mode and/or the unidirectional mode upon determining a high-level responsive sedation state of the patient (sections 0079, 0083, quoted above).
Regarding claim 29, Torch discloses that the communication module 130 is configured to perform and/or allow the patient audible communication, video-based communication, eye-based communication, touchscreen-based communication, tactile communication, EEG-based communication, EOG-based communication, automatic lip-reading communication, head gestures-based communication, or any combination thereof (sections 0079, 0083: activating a light or speaker, and monitor the user's eyelid movement in anticipation of receiving a response, e.g., a specific sequence of blinks, acknowledging the stimulus within a predetermined time; a user wearing the detection device 30 may intentionally blink in a predetermined pattern, for example, in Morse code or other blinked code, to communicate an understandable message to people or equipment, e.g., to announce an emergency).
Regarding claim 31, Torch discloses that the sedation state of the patient is the delirium state of the patient (section 0171: the processor may analyze data from the endocameras and exocameras to correlate movement of the eye(s) relative to images on the display to study a variety of oculometric parameters, such as slow rolling eye movement, poor eye fixation and/or tracking, wandering gaze, increased eye blinking, hypnotic staring, prolonged eyelid droops or blinking, slow velocity eyelid opening and closing, startled eyelid opening velocities, long-term pupillary constrictive changes, unstable pupil diameters, obscured visual-evoked pupil reactions, and/or other parameters discussed elsewhere herein; these procedures may be used to study an individual's responses when faced with various environmental, alcohol- or drug-induced, and/or other conditions).
Regarding claim 32, Torch discloses input data comprises recorded communication of the patient with the communication module (Section 0060, the amplifier and transmitter 152 may communicate via telephone communication lines, satellites and the like, to transmit the stream of data to a remote location miles away from the system, where the data can be monitored, analyzed in real time, or stored (e.g., as in a truck or aircraft "black box" recorder) for future or retrospective analysis).
Regarding claim 33, Torch discloses input data comprises eye image data indicative of recorded images of an eye of the patient (section 0111, system also includes one or more cameras oriented generally towards one or both of the user's eyes. Each camera may include a fiber optic bundle 832 including a first end mounted to or adjacent the bridge piece (or elsewhere on the frame, e.g., at a location that minimizes interferences with the user's vision), and a second end that is coupled to a detector, e.g., a CCD or CMOS sensor, which may convert images into digital video signals).
Regarding claim 34, Torch discloses a camera unit configured for recording images of an eye of the patient and generating eye image data based thereon, wherein said input data comprises said image data (section 0111, system also includes one or more cameras oriented generally towards one or both of the user's eyes. Each camera may include a fiber optic bundle 832 including a first end mounted to or adjacent the bridge piece (or elsewhere on the frame, e.g., at a location that minimizes interferences with the user's vision), and a second end that is coupled to a detector, e.g., a CCD or CMOS sensor, which may convert images into digital video signals).
Regarding claim 35, Torch discloses that input data comprises EEG data indicative of recorded EEG signals of the patient (sections 0063, 0106: the stream of data may be displayed along with other physiological data, such as skin conductance, body temperature, cardiovascular data (e.g., heart rate, blood pressure), respiratory data (e.g., respiration rate, blood oxygen and carbon dioxide levels), electromyographic (EMG) and/or actigraphic data (i.e., body movement, position), and/or other sleep polysomnographic (PSG) or electroencephalographic (EEG) variables; the transmitted stream of data may be processed alone or along with additional data, such as other vehicle sensor information, and/or human factors, e.g., EKG, EEG, EOG, pulse, blood pressure, respiratory rate, oximetry, actigraphy, head position, voice analysis, body temperature, skin conductance, self-assessment measures and performance vigilance responses, observation by others through a fixed non-wearable dash-board or visor-mounted camera system, etc.).
Regarding claim 36, Torch discloses an EEG unit configured for recording EEG signals of the patient and generate EEG data based thereon, said input data comprises said EEG data (Section 0063, 0106, stream of data may be displayed along with other physiological data, such as skin conductance, body temperature, cardiovascular data (e.g. heart rate, blood pressure), respiratory data (e.g. respiration rate, blood oxygen and carbon dioxide levels), electromyographic (EMG) and/or actigraphic data (i.e. body movement, position), and/or other sleep polysomnographic (PSG) or electroencephalographic (EEG) variables. The transmitted stream of data may be processed alone or along with additional data, such as other vehicle sensor information, and/or human factors e.g. EKG, EEG, EOG, pulse, blood pressure, respiratory rate, oximetry, actigraphy, head position, voice analysis, body temperature, skin conductance, self-assessment measures and performance vigilance responses, observation by others through a fixed non-wearable dash-board or visor-mounted camera system, etc.).
Regarding claim 37, Torch discloses receiving and processing input data indicative of the patient's sedation state (sections 0079, 0083, 0117: activating a light or speaker, and monitor the user's eyelid movement in anticipation of receiving a response, e.g., a specific sequence of blinks, acknowledging the stimulus within a predetermined time; a user wearing the detection device 30 may intentionally blink in a predetermined pattern; a state of drowsiness or lack of consciousness); determining, based on the input data, the sedation state of the patient (sections 0083, 0117, quoted above); and, in response to the determined sedation state of the patient, outputting by a communication module a selected communication in a communication mode selected from at least one of three communication modes (Figs. 1, 2, 8; section 0048: the detection device may be coupled to a processing box that converts the detected eye and/or eyelid movement into a stream of data, an understandable message, and/or other information, which may be communicated, for example, using a video display, to a medical care provider) that comprise: (1) a unidirectional mode, in which the communication module outputs communication to the patient that does not require his/her response (section 0072: the detection device may be used to detect impending drowsiness or "micro-sleeps," i.e., sleep intrusions into wakefulness lasting a few seconds, with the processing box triggering a warning to alert the user); (2) a responsive mode, in which the communication module outputs communication to the patient that requires his/her response, the responsive mode comprising questions to or requests from the patient (section 0079: the detector device may produce a stimulus, e.g., activating a light or speaker, and monitor the user's eyelid movement in anticipation of receiving a response, e.g., a specific sequence of blinks, acknowledging the stimulus within a predetermined time); and (3) an open communication mode, in which the communication module allows the patient to proactively initiate communication with the system (section 0083: a user wearing the detection device 30 may intentionally blink in a predetermined pattern, for example, in Morse code or other blinked code, to communicate an understandable message to people or equipment, e.g., to announce an emergency); wherein the determination of the sedation state of the patient based on the input data comprises classifying the sedation state of the patient into one of at least three sedation levels, each level triggering one or more level-specific communication modes (sections 0079, 0083, 0117, quoted above); wherein said classifying comprises scoring the sedation state of the patient, the score defining the sedation level, wherein three ranges of scores define the sedation levels, each range defining a different level; and wherein said outputting comprises: selecting the unidirectional mode upon determining a non-responsive sedation state of the patient (section 0072, quoted above); selecting the responsive mode and/or the unidirectional mode upon determining a low-level responsive sedation state of the patient (section 0079, quoted above); and selecting the open communication mode and/or the responsive mode and/or the unidirectional mode upon determining a high-level responsive sedation state of the patient (sections 0079, 0083, quoted above).
Regarding claim 38, Torch discloses that the selected communication is any one of audible communication, video-based communication, eye-based communication, touchscreen-based communication, tactile communication, EEG-based communication, EOG-based communication, automatic lip-reading communication, head gestures-based communication, or any combination thereof (sections 0079, 0083: activating a light or speaker, and monitor the user's eyelid movement in anticipation of receiving a response, e.g., a specific sequence of blinks, acknowledging the stimulus within a predetermined time; a user wearing the detection device 30 may intentionally blink in a predetermined pattern, for example, in Morse code or other blinked code, to communicate an understandable message to people or equipment, e.g., to announce an emergency).
Regarding claim 40, Torch discloses that the sedation state of the patient is the delirium state of the patient (section 0171: the processor may analyze data from the endocameras and exocameras to correlate movement of the eye(s) relative to images on the display to study a variety of oculometric parameters, such as slow rolling eye movement, poor eye fixation and/or tracking, wandering gaze, increased eye blinking, hypnotic staring, prolonged eyelid droops or blinking, slow velocity eyelid opening and closing, startled eyelid opening velocities, long-term pupillary constrictive changes, unstable pupil diameters, obscured visual-evoked pupil reactions, and/or other parameters discussed elsewhere herein; these procedures may be used to study an individual's responses when faced with various environmental, alcohol- or drug-induced, and/or other conditions).
Regarding claim 41, Torch discloses input data comprises recorded communication of the patient with the communication module (Section 0060, the amplifier and transmitter 152 may communicate via telephone communication lines, satellites and the like, to transmit the stream of data to a remote location miles away from the system, where the data can be monitored, analyzed in real time, or stored (e.g., as in a truck or aircraft "black box" recorder) for future or retrospective analysis).
Regarding claim 42, Torch discloses input data comprises eye image data indicative of recorded images of an eye of the patient (section 0111, system also includes one or more cameras oriented generally towards one or both of the user's eyes. Each camera may include a fiber optic bundle 832 including a first end mounted to or adjacent the bridge piece (or elsewhere on the frame, e.g., at a location that minimizes interferences with the user's vision), and a second end that is coupled to a detector, e.g., a CCD or CMOS sensor, which may convert images into digital video signals).
Regarding claim 43, Torch discloses that input data comprises EEG data indicative of recorded EEG signals of the patient (sections 0063, 0106: the stream of data may be displayed along with other physiological data, such as skin conductance, body temperature, cardiovascular data (e.g., heart rate, blood pressure), respiratory data (e.g., respiration rate, blood oxygen and carbon dioxide levels), electromyographic (EMG) and/or actigraphic data (i.e., body movement, position), and/or other sleep polysomnographic (PSG) or electroencephalographic (EEG) variables; the transmitted stream of data may be processed alone or along with additional data, such as other vehicle sensor information, and/or human factors, e.g., EKG, EEG, EOG, pulse, blood pressure, respiratory rate, oximetry, actigraphy, head position, voice analysis, body temperature, skin conductance, self-assessment measures and performance vigilance responses, observation by others through a fixed non-wearable dash-board or visor-mounted camera system, etc.).
Claim Objections
Claims 30 and 39 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. No prior art was found that discloses the use of a Richmond Agitation-Sedation Scale (RASS) score in combination with the other limitations of claims 30 and 39, which include the limitations of independent claims 28 and 37, respectively.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JON ERIC C MORALES whose telephone number is (571) 272-3107. The examiner can normally be reached Monday-Friday, 8:30 AM to 5:30 PM CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Hamaoui, can be reached at 571-270-5625. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JON ERIC C MORALES/Primary Examiner, Art Unit 3796