Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the visual stimuli, task, and time series of eye movements must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-27 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Within claims 1, 26, and 27, the limitations “wherein the first time series of eye movements and the task are taken at the same time; … comparing, by the computing device, the eye movements from the first time series and the task” are unclear as to how a “task is taken” and how eye movements may be compared with a task. By virtue of dependency, claims 2-25 are also rejected.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-7, 9-11, 17, and 26-27 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Publicover (US 20150338915 A1).
Regarding claim 1, Publicover teaches a method of discovering relationships between eye movements and cognitive and/or emotional responses of a user ([0099] tracked eye movements and geometry can be used to discern the physiological and/or emotional states of an individual in a continuous fashion), the method comprising the steps of:
engaging the user in at least one task, each task comprising a visual stimuli via an electronic display ([0267] Actions may be taken by the interaction model and game when such graphics are viewed, not viewed, or viewed in sequence. Points may be awarded or game play altered based upon defined gaze activity; [0293] The system may comprise a display device that is viewable by the user.)
and each task configured to elicit a predicted specific cognitive and/or emotional response from the user ([0099] the degree of pupil dilation can be used to discern emotional states such as fear, interest, or cognitive load. Anti-saccadic movements can be an indication of viewing something distasteful. When combined with information about the real or virtual objects being viewed by a user, an indication of the classes of objects that, for example, elicit fright, attraction, or interest can be discerned. Such information can, for example, be used to tailor subsequent displays of information.);
varying the visual stimuli to elicit the predicted specific cognitive and/or emotional response from the user ([0099] the degree of pupil dilation can be used to discern emotional states such as fear, interest, or cognitive load. Anti-saccadic movements can be an indication of viewing something distasteful. When combined with information about the real or virtual objects being viewed by a user, an indication of the classes of objects that, for example, elicit fright, attraction, or interest can be discerned. Such information can, for example, be used to tailor subsequent displays of information.);
providing a camera filming at least one eye of the user ([0019] one or more cameras or at least one sensor to monitor changes in the reflection of light … project reference light onto one or both eyes);
recording a first time series of eye movements by the user with the camera ([0221] Using eye position data determined from a series of eye images, algorithmic “filters” can be constructed to identify and distinguish, in real-time, the presence of a saccadic or smooth pursuit eye movements; [0583] The use of deep learning approaches to eye signals can be classified as a “time series” data analysis. In other words, eye signals can be recognized from a series of (X, Y) positions of one or both eyes measured over time.);
recording each task corresponding to the first time series of eye movements by the user; wherein the first time series of eye movements and the task are taken at the same time; sending the first time series of eye movements and the task to a computing device; comparing, by the computing device, the eye movements from the first time series and the task ([0477] FIG. 16 shows an overall sequence of algorithmic steps used to detect eye signals that may lead up to the performance of an action. Images of an eye are acquired and analyzed at 1630 to determine gaze pathways. If an eye cannot be found because it is covered by an eyelid or otherwise obscured at 1631, timing registers are simply updated and analysis proceeds to the next camera frame. Filters at 1632 are applied to all series of eye positions to determine whether primarily saccadic or smooth pursuit [or vergence or vestibulo-ocular] eye movements are present. If matches to these forms of voluntary eye movement are found, then gaze pathways are further compared with screen positions at 1633 to determine if regions corresponding to interactables or real objects are being viewed. Gaze patterns are then further analyzed to determine if they generally correspond to activation interactables displayed on the screen at 1634. If a match is found, then actions corresponding to the selected interactable[s] are performed at 1635.);
and identifying, by the computing device, at least one relationship between eye movements that correlate to the actual specific cognitive and/or emotional response ([0618] The output of the process can be a null indicating “no action” or a set of one or more intents and/or conditions 2739 of the device wearer. Intents can, for example, include the activation of an intended action [i.e., a binary classification]. Simultaneously determined conditions can include a user state [i.e., generally classified over a continuous range] such as cognitive load or a degree of fatigue; Fig. 27).
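For illustration, the [0477]/FIG. 16 sequence quoted above reduces to a short control loop: acquire gaze positions, skip obscured frames, filter the series for a voluntary movement, compare gaze with on-screen regions, and trigger the matched interactable. A minimal Python sketch of that sequence follows; all names, data shapes, and the placeholder movement filter are assumptions rather than Publicover's code (a velocity-based classifier per [0227] is sketched separately under claim 7).

```python
import math

def is_voluntary_movement(path, dt=1 / 60, min_deg_per_s=30.0):
    """Placeholder filter for step 1632: treat any step at saccadic speed
    (>= 30 deg/s per [0227]) as a voluntary movement."""
    (x0, y0), (x1, y1) = path[-2], path[-1]
    return math.hypot(x1 - x0, y1 - y0) / dt >= min_deg_per_s

def detect_eye_signal(gaze_series, interactables, radius_deg=1.5):
    """gaze_series: (x, y) gaze angles in degrees, or None when the eye is
    obscured (step 1631). interactables: {name: (x, y)} screen regions."""
    path = [g for g in gaze_series if g is not None]    # steps 1630-1631
    if len(path) < 2 or not is_voluntary_movement(path):
        return None                                     # no signal this frame
    x, y = path[-1]
    for name, (ix, iy) in interactables.items():        # steps 1633-1634
        if math.hypot(x - ix, y - iy) <= radius_deg:
            return name                                 # action target, step 1635
    return None

# Toy usage: a gaze jump landing near an on-screen "menu" interactable.
print(detect_eye_signal([(0.0, 0.0), None, (5.0, 4.0)], {"menu": (5.2, 4.1)}))
```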
Regarding claim 2, Publicover teaches the method of claim 1, wherein the camera is physically attached to the user ([0020] one or more illumination sources, cameras, or other sensors disposed onto, or embedded within one or more portable devices [phone, tablet, web camera, laptop, camera, video camera, nomadic device, electronic accessory components etc.] or wearable devices [e.g., glasses, watch, hat, jewelry, clothing, personal accessories, etc.]).
Regarding claim 3, Publicover teaches the method of claim 2, wherein the camera is a pair of eyeglasses worn by the user ([0020] one or more illumination sources, cameras, or other sensors disposed onto, or embedded within one or more portable devices [phone, tablet, web camera, laptop, camera, video camera, nomadic device, electronic accessory components etc.] or wearable devices [e.g., glasses, watch, hat, jewelry, clothing, personal accessories, etc.]).
Regarding claim 4, Publicover teaches the method of claim 1, wherein the camera is not physically attached to the user ([0020] one or more illumination sources, cameras, or other sensors disposed onto, or embedded within one or more portable devices [phone, tablet, web camera, laptop, camera, video camera, nomadic device, electronic accessory components etc.]).
Regarding claim 5, Publicover teaches the method of claim 4, wherein the electronic display and the camera are part of a smartphone or a tablet ([0020] one or more illumination sources, cameras, or other sensors disposed onto, or embedded within one or more portable devices [phone, tablet, web camera, laptop, camera, video camera, nomadic device, electronic accessory components etc]; display 530).
Regarding claim 6, Publicover teaches the method of claim 4, wherein the computing device comprises a smartphone, a tablet, a laptop computer or a desktop computer, wherein the computing device comprises the electronic display and the camera ([0020] one or more illumination sources, cameras, or other sensors disposed onto, or embedded within one or more portable devices [phone, tablet, web camera, laptop, camera, video camera, nomadic device, electronic accessory components etc.]; display 530).
Regarding claim 7, Publicover teaches the method of claim 1, wherein the eye movements comprise:
X gaze location, Y gaze location ([0554] FIG. 20 illustrates the time at which a new object is introduced via a step change in opacity during a saccade. The upper traces represent measured X [i.e., horizontal] 2030 and Y [i.e., vertical] 2031 gaze locations. When the initiation of a saccadic eye movement is detected at 2032, opacity is changed to a desired level during the time of the saccade at 2033),
saccade rate, saccade peak velocity, fixation duration, fixation entropy, gaze deviation of polar angle, gaze deviation of eccentricity, re-fixations, smooth pursuits and/or scan path ([0054] FIG. 3 is a flowchart illustrating the classification of saccades, micro-saccades, smooth pursuit eye movements, and fixations; [0227] If the eye velocity is greater than a minimum threshold for saccadic movements [typically 30° per second] at 334, then the system signifies that some form of saccadic movement has occurred at 335. If the saccadic movement occurred over a distance that is generally within the foveal view [i.e., within approximately 1° to 3°] at 335, then the eye movement is recorded as a micro-saccade at 336. If, on the other hand, the angular distance traveled by the eye is greater than this range at 335, the event is registered as a saccade at 337; [0228]).
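The velocity and amplitude test recited at [0227] (steps 334-337) reduces to two comparisons. A hedged sketch follows, using the thresholds quoted above; the sampling assumptions and function name are invented.

```python
SACCADE_MIN_VELOCITY = 30.0   # degrees/second, step 334 of [0227]
FOVEAL_SPAN = 3.0             # approx. upper bound of foveal view in degrees, step 335

def classify_movement(angular_distance_deg, duration_s):
    """Return 'fixation/pursuit', 'micro-saccade', or 'saccade' per [0227]."""
    velocity = angular_distance_deg / duration_s
    if velocity <= SACCADE_MIN_VELOCITY:
        return "fixation/pursuit"      # below the saccadic velocity floor
    if angular_distance_deg <= FOVEAL_SPAN:
        return "micro-saccade"         # movement stays within foveal view (step 336)
    return "saccade"                   # larger movement, registered at step 337

print(classify_movement(1.2, 0.02))    # -> micro-saccade
print(classify_movement(8.0, 0.05))    # -> saccade
```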
Regarding claim 9, Publicover teaches the method of claim 1, wherein the eye movements comprise a change in blinking which includes blink rate, blink duration, blink latency, partial blinks, blink entropy and/or squinting ([0015] Added to this, viewing of the eye is often obscured during normal function by eyelids and lashes. Furthermore, eye blinks in which the eyelid completely blocks viewing of the position of the eye must occur periodically for sustained function in order to maintain lubrication and the aqueous environment of the surface of the eye. Blink durations [normally lasting from 0.3 to 0.4 seconds] and velocities can be affected by fatigue, attention, injury, medications, drugs, alcohol, and disease; [0238] prolonged eye blinks to discern intent … Blinks take even longer periods of time, requiring a complex series of muscle contractions. The minimum time for a blink is about 0.3 to 0.4 seconds; [0592] For example, movement of the eye lids and/or eye lashes can be used to anticipate that a blink is about to occur. As a blink is initiated, the system can anticipate that the user will be functionally blind for the duration of a blink).
Regarding claim 10, Publicover teaches the method of claim 1, wherein the task comprises a task configured to deliver a large, unexpected reward or penalty ([0267] Actions may be taken by the interaction model and game when such graphics are viewed, not viewed, or viewed in sequence. Points may be awarded or game play altered based upon defined gaze activity.), wherein the predicted specific cognitive and/or emotional response comprises surprise ([0214] eye signals may be combined with other input modalities to control device actions. These modalities may include head movements such as shakes, tilts, or leans [e.g., indicating “yes,” “no,” interest, surprise, reflection]).
Regarding claim 11, Publicover teaches the method of claim 1, wherein the task comprises a task configured to alternate between highly focused attention or carefree distributed attention ([0380] Switching or weighting different algorithms can be based upon environmental conditions (e.g. lighting) or even physiological factors such as cognitive load; [0536] steps can be taken to control whether objects that are introduced or modified within a display are presented in a fashion to either: 1] attract attention or 2] avoid distraction.), wherein the predicted specific cognitive and/or emotional response comprises vigilance ([0245] By recognizing stereotypic eye movements during the reading process, rates of reading, any text that might have been skipped and/or conversely, text that attracted prolonged attention can also be identified. The number of regressions, sub-vocalizations [using additional sensors], saccade distances, and fixation times can be used as indicators of both interest in and complexity of materials).
Regarding claim 17, Publicover teaches the method of claim 1, wherein the task comprises a computer game utilizing a computer mouse, joystick, keyboard and/or touch screen ([0214] In other embodiments, eye signals may be combined with other input modalities to control device actions. These modalities may include … traditional computer input devices such as keyboards, mice, and touch screens; [0267] In a gaming environment).
Claims 26-27 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Rau (US 20160022193 A1).
Regarding claim 26, Rau teaches a method of discovering relationships between eye movements and cognitive and/or emotional responses of a user ([0004] systems and methods for real-time measurement of objective, autonomic physiological parameters that allow for monitoring of mental health illnesses and emotional health changes), the method comprising the steps of:
engaging the user in at least one task, each task comprising a visual stimuli via an electronic display and each task configured to elicit a predicted specific cognitive and/or emotional response from the user; varying the visual stimuli to elicit the predicted specific cognitive and/or emotional response from the user ([0135] Structured Stimuli are defined hereafter as the questions to the patient in physicians' patient examination to evoke patient responses as part of differential diagnostic procedure to arrive in patient's illness prognosis. These stimuli can be questions, playing video games, or responding with verbal and task oriented computations for financial, cognition, physical dexterity measuring inputs; [0126] compare with anticipated or expected responses for standardized clinical tests based on a patient's illness and combine the information of significant biometric changes and thresholds provided by this system through the real-time data analytics.);
providing a camera filming at least one eye of the user; recording a first time series of eye movements by the user with the camera; recording each task corresponding to the first time series of eye movements by the user; wherein the first time series of eye movements and the task are taken at the same time ([0143] These systems have pairs of video [visible spectrum] and thermal infrared [IR] video camera subsystems to capture facial features, pupil size changes, eyelid flutter rates, perspiration, and facial blood flow changes; [0067] tracking pupil movements; [0135] FIG. 1 illustrates stimuli interactions with relevant main components of brain structure, nervous system responses and biometric sensors' data capture flow chart that provides an overview of the system and a method for collecting objective measurements of different biometric parameters in real time);
sending the first time series of eye movements and the task to a computing device; comparing, by the computing device, the eye movements from the first time series and the task; and identifying, by the computing device, at least one relationship between eye movements that correlate to a diagnosis of a mental health condition ([0136] The patient's various autonomic physiological parameters [APPs] will respond and react to the induced stimuli. By utilizing biometric devices 1018, these APPs can be detected with far more precision and accuracy than is feasible through a clinician's simultaneous visual observations. Some of the APPs captured by the biometric devices include … eye movements, facial changes including color & texture changes, posture changes, muscle movements [voluntary and involuntary], and speech and tonal changes, as applicable. These biometric parameters can capture changes and severity of individual patient's feelings, emotions 1014 and their innate resiliencies, coping skills, [healthy] behavior, function and responses 1012. From the biometric devices, the data sets 1020 created will be provided in near real time to the clinician that will be compared to a master database of the patient's previous sessions and other patients with comparable illness conditions; data analytics 7004; [0131] The present invention can be used to diagnose and treat many mental health disorders and illnesses, such as, but not limited to: anxiety [e.g., generalized anxiety disorder, panic disorders, phobias, obsessive-compulsive disorders [OCD], post-traumatic stress disorder [PTSD]; attention deficient disorder [ADD], and attention deficit hyperactivity disorder [ADHD]]; depressive disorder [e.g., dysthymia, depression in the elderly, postpartum depression]; stress or mild depression).
Regarding claim 27, Rau teaches a method of discovering relationships between eye movements and cognitive and/or emotional responses of a user ([0004] systems and methods for real-time measurement of objective, autonomic physiological parameters that allow for monitoring of mental health illnesses and emotional health changes), the method comprising the steps of:
engaging the user in at least one task, each task comprising a visual stimuli via an electronic display and each task configured to elicit a predicted specific cognitive and/or emotional response from the user; varying the visual stimuli to elicit the predicted specific cognitive and/or emotional response from the user ([0135] Structured Stimuli are defined hereafter as the questions to the patient in physicians' patient examination to evoke patient responses as part of differential diagnostic procedure to arrive in patient's illness prognosis. These stimuli can be questions, playing video games, or responding with verbal and task oriented computations for financial, cognition, physical dexterity measuring inputs; [0126] compare with anticipated or expected responses for standardized clinical tests based on a patient's illness and combine the information of significant biometric changes and thresholds provided by this system through the real-time data analytics.);
providing a camera filming at least one eye of the user; recording a first time series of eye movements by the user with the camera; recording each task corresponding to the first time series of eye movements by the user; wherein the first time series of eye movements and the task are taken at the same time ([0143] These systems have pairs of video [visible spectrum] and thermal infrared [IR] video camera subsystems to capture facial features, pupil size changes, eyelid flutter rates, perspiration, and facial blood flow changes; [0067] tracking pupil movements; [0135] FIG. 1 illustrates stimuli interactions with relevant main components of brain structure, nervous system responses and biometric sensors' data capture flow chart that provides an overview of the system and a method for collecting objective measurements of different biometric parameters in real time);
sending the first time series of eye movements and the task to a computing device; comparing, by the computing device, the eye movements from the first time series and the task; and identifying, by the computing device, at least one relationship between eye movements that correlate to a measurement of a sympathetic nervous system of the user ([0136] The patient's various autonomic physiological parameters [APPs] will respond and react to the induced stimuli. By utilizing biometric devices 1018, these APPs can be detected with far more precision and accuracy than is feasible through a clinician's simultaneous visual observations. Some of the APPs captured by the biometric devices include … eye movements, facial changes including color & texture changes, posture changes, muscle movements [voluntary and involuntary], and speech and tonal changes, as applicable. These biometric parameters can capture changes and severity of individual patient's feelings, emotions 1014 and their innate resiliencies, coping skills, [healthy] behavior, function and responses 1012; data analytics 7004; [0111] FIG. 1 is a block diagram depicting the relationship between central nervous system [CNS] stimulus and the biometric reactions; [0122] The present invention links the types and degrees of intensity of stimuli [e.g., discriminating, eliciting, emotional, reinforcement, nominal, functional, or pseudo-reflex] to the nervous system's reactions, validates the diagnosis and patient progress via evaluation by experienced and trained clinicians).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 8 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Publicover (US 20150338915 A1).
Regarding claim 8, Publicover teaches the method of claim 1, wherein the eye movements comprise a change in the pupillary system which includes pupil diameter, velocity of the change in the pupil diameter, acceleration of the change in the pupil diameter, constriction latency, dilation duration, spectral features and/or iris muscle features ([0038] tracked eye movements and geometries [e.g. pupil dilation, anti-saccades] may be combined with information about the real or virtual objects being viewed by a user to discern the physiological and/or emotional states of an individual in a substantially continuous fashion – [one of ordinary skill in the art with the continuous pupil geometry data can determine the velocity/acceleration/latency/duration of the dilation and constriction to discern the physiological and/or emotional states of an individual in a substantially continuous fashion]; [0366] The positions of features on the eye, such as the iris, sclera or even pupil viewed edge-on are tracked).
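The bracketed note above can be made concrete: given a continuous pupil-diameter series, velocity and acceleration follow by finite differences. A minimal sketch with invented data and an assumed sampling rate:

```python
import numpy as np

fs = 60.0                                    # hypothetical frames per second
diameter = np.array([3.0, 3.0, 3.2, 3.6, 3.9, 4.0, 4.0])   # pupil diameter, mm

velocity = np.gradient(diameter) * fs        # rate of change of dilation, mm/s
acceleration = np.gradient(velocity) * fs    # mm/s^2
print(velocity)        # dilation velocity at each sample
print(acceleration)    # dilation acceleration at each sample
```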
Regarding claim 12, Publicover teaches the method of claim 1. While Publicover fails to specifically disclose a task configured to randomly disable a mouse click response or a screen touch response when the user was interacting with the display screen, computer malfunctions are well known for causing frustration, and one of ordinary skill in the art would know how to incorporate such a task. Publicover discloses wherein the predicted specific cognitive and/or emotional response comprises frustration ([0099] an indication of the classes of objects that, for example, elicit fright, attraction, or interest can be discerned. Such information can, for example, be used to tailor subsequent displays of information; [0244] intentional eye-movements [rolling of eyes in frustration]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Publicover (US 20150338915 A1) in view of Stack (US 20140154651 A1).
Regarding claim 13, Publicover teaches the method of claim 1. However, Publicover fails to disclose varying difficulty.
Stack teaches a method to determine the subject's peak cognitive performance using smooth pursuit tracking tests with varying difficulty. Stack discloses wherein the task comprises a task configured to vary the difficulty of puzzle between easy and hard ([0050] the test taker is slowly pushed to difficulty levels higher incrementally in order to get further confirmation that the difficulty level at this stage is really a representation of the most difficult stage level the patient can endure), wherein the predicted specific cognitive and/or emotional response comprises a corresponding low to high degree of cognitive load ([0054] this test is an ideal type of cognitive test because the upper bound cognitive load adapts in a relatively dynamic and variable manner to the upper bound of the cognitive load of a patient.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Publicover to include cognitive load corresponding to varying difficulty of a task as disclosed in Stack to evaluate cognitive abilities without overloading the patient (Stack [0054]).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Publicover (US 20150338915 A1) in view of Palti-Wasserman (US 20110109879 A1).
Regarding claim 14, Publicover teaches the method of claim 1. However, Publicover fails to disclose an opponent.
Palti-Wasserman teaches a system for profiling a personal aspect of a subject based on eye response to visual stimuli. Palti-Wasserman discloses wherein the task comprises a task configured to change an opponent condition in a subsequent task, wherein the predicted specific cognitive and/or emotional response comprises anxiety ([0028] A different application may be used in the electronic gaming industry. A player's profile may be prepared and used for the players benefit, or for his opponent to see).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Publicover to include the anxiety elicited by an opponent as disclosed in Palti-Wasserman because by calculating and displaying a player's stress level to his opponents, the game becomes more interesting and challenging (Palti-Wasserman [0028]).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Publicover (US 20150338915 A1) in view of Aimone (US 20160077547 A1).
Regarding claim 15, Publicover teaches the method of claim 1. However, Publicover fails to disclose an attack.
Aimone teaches a training apparatus having an input device and a wearable computing device with a bio-signal sensor and a display to provide an interactive virtual reality (“VR”) environment for a user in which the user interactions and bio-signal data are scored with a user state score and a performance score. Aimone discloses wherein the task comprises a task configured to change the level of attack on the user, wherein the predicted specific cognitive and/or emotional response comprises stress ([0159] Dan wants to increase his performance in these games so that he can perform well under high levels of stress. The VR game presents a series of tests that put Dan under simulated stress events—i.e. a sniper attack, etc. The EEG sensors in Dan's VR headset assess his emotional/stress state during these tests and provide him with updates based on his performance. Dan's performance during these “stress tests” can unlock different achievements and new missions.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Publicover to include the stress elicited by an attack as disclosed in Aimone because these “stress tests” can provide a valuable metric for gauging how a subject will perform in real-life situations that are stressful (Aimone [0159]).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Publicover (US 20150338915 A1) in view of Sakamoto (US 8790280 B2).
Regarding claim 16, Publicover teaches the method of claim 1. However, Publicover fails to disclose details for camera placement beyond facing the user.
Sakamoto teaches a human state estimating device that uses eye movement. Sakamoto discloses wherein the camera is disposed at or between +45 degrees to -45 degrees in relation to a sagittal plane of the user and at or between +20 degrees to -45 degrees in relation to the transverse plane of the user (Col 15, lines 12-23, FIG. 12 is a configuration diagram of a measuring device used by the inventors for accurately measuring small eye movement among the eyeball movements. In FIG. 12, [a] is a lateral view and [b] is a bird's-eye view. This measuring device corresponds to another embodiment of the video obtaining unit 11 and the analyzing unit 12 of the human state estimating device 10 in the present invention. The optical axis of a high-speed camera 88 equipped with a high-magnification lens 33 is set on an equatorial plane. The angle created by the optical axis of the high-speed camera 38 and the visual axis when the eyeball looks to the front is 45 degrees as shown in FIG. 13).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Publicover to include the camera position as disclosed in Sakamoto to accurately measure small eye movement among the eyeball movements (Sakamoto, Col 15, lines 12-13).
Claims 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Publicover (US 20150338915 A1) in view of Marci (US 20100004977 A1).
Regarding claim 18, Publicover teaches the method of claim 1. However, Publicover fails to disclose a set period for the task.
Marci teaches a method and system for measuring the biometric (physically, behaviorally, biologically and self-report based) responses of an audience to a presentation or interactive experience. Marci discloses wherein the task comprises a set time period ([0168] an event window begins when a user is presented with a screen display which involves the user in an interactive presentation, task or activity and extends for a duration of five [or in some cases, up to ten] seconds).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Publicover to include a set time period for the task as disclosed in Marci to collect the eye tracking, behavior, and biometric response data at 20 ms intervals, providing up to 500 data points from each sensor for the event window, which is preferable for processing (Marci [0167] and [0168]).
Regarding claim 19, Publicover teaches the method of claim 1. However, Publicover fails to disclose a set period for the task.
Marci discloses wherein the task comprises a set time period ([0168] an event window begins when a user is presented with a screen display which involves the user in an interactive presentation, task or activity and extends for a duration of five [or in some cases, up to ten] seconds).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Publicover to include a period of 10 seconds for the task as disclosed in Marci to collect the eye tracking, behavior, and biometric response data at 20 ms intervals, providing up to 500 data points from each sensor for the event window, which is preferable for processing (Marci [0167] and [0168]).
Claims 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Publicover (US 20150338915 A1) in view of Najarian (US 20120123232 A1).
Regarding claim 20, Publicover teaches the method of claim 1. However, Publicover fails to disclose linear regression. Najarian teaches an apparatus to analyze heart-related electronic signals, including eye movement, to identify various states of the cardiovascular system, which is valuable for understanding the physical and mental stress of the patient. The combination of Publicover/Najarian discloses wherein the step of identifying, by the computing device, relationships between eye movements that correlate to the outward events (Publicover: [0186] Eye movements and patterns can also be discerned using non-head mounted cameras including those embedded within cell phones, tablets, laptop computers, and desktop computers; [0477] Gaze patterns are then further analyzed to determine if they generally correspond to activation interactables displayed on the screen at 1634. If a match is found, then actions corresponding to the selected interactable[s] are performed at 1635) comprises linear regression computing beta weights to relate (Najarian: [0143] The algorithm includes at least one context detector 1605 that produces a weight, shown as W1 through WN … a regression algorithm 1610 is provided where a continuous prediction is computed taking raw or derived channels as input. The individual regressions can be any of a variety of regression equations or methods, including, for example, multivariate linear or polynomial regression) eye movements to cognitive and/or emotional responses (Publicover: [0618] determined conditions can include a user state [i.e., generally classified over a continuous range] such as cognitive load or a degree of fatigue; Fig. 27).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Publicover to use linear regression as disclosed in Najarian to compute a continuous estimate of the output of the parameter of interest in the algorithm (Najarian [0143]).
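For reference, the claim 20 mapping amounts to ordinary least squares: beta weights relating eye-movement features to a graded response. The sketch below uses invented features and data; Najarian [0143] describes the regression approach in general terms, not this code.

```python
import numpy as np

# Rows are observations; columns are invented eye-movement features:
# [saccade_rate, fixation_duration, blink_rate]
X = np.array([[2.1, 0.30, 0.20],
              [3.5, 0.22, 0.35],
              [1.2, 0.45, 0.10],
              [4.0, 0.18, 0.40]])
y = np.array([0.4, 0.7, 0.2, 0.9])             # graded response per observation

X1 = np.column_stack([np.ones(len(X)), X])     # prepend an intercept column
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)  # least-squares beta weights
print(beta)                                    # [intercept, b1, b2, b3]
print(X1 @ beta)                               # fitted response for each row
```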
Regarding claim 21, Publicover teaches the method of claim 1. While Publicover discloses Bayesian statistics ([0571]), Publicover fails to specifically disclose a Bayesian network to identify non-linear patterns. The combination of Publicover/Najarian discloses wherein the step of identifying, by the computing device, relationships between eye movements that correlate to the outward events (Publicover: [0186] Eye movements and patterns can also be discerned using non-head mounted cameras including those embedded within cell phones, tablets, laptop computers, and desktop computers; [0477] Gaze patterns are then further analyzed to determine if they generally correspond to activation interactables displayed on the screen at 1634. If a match is found, then actions corresponding to the selected interactable[s] are performed at 1635) comprises identifying non-linear patterns using Bayesian deep belief networks (Najarian: [0140] Further examples of the types of non-linear functions and/or machine learning method that may be used in the present invention include the following: conditionals, case statements, logical processing, probabilistic or logical inference, neural network processing, kernel based methods, memory-based lookup including kNN and SOMs, decision lists, decision-tree prediction, support vector machine prediction, clustering, boosted methods, cascade-correlation, Boltzmann classifiers, regression trees, case-based reasoning, Gaussians, Bayes nets, dynamic Bayesian networks). The motivation to combine stated above with respect to claim 20 applies equally to claim 21.
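Similarly for claim 21, Najarian [0140] lists Bayes nets among the usable non-linear methods. The sketch below substitutes the simplest member of that family, Gaussian naive Bayes, as a stand-in; it is not a Bayesian deep belief network, and the features and data are invented.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Invented eye-movement features: [saccade_rate, pupil_diameter_change]
X = np.array([[2.0, 0.1], [3.8, 0.6], [1.5, 0.0], [4.2, 0.7]])
y = np.array([0, 1, 0, 1])      # 0 = low cognitive load, 1 = high

model = GaussianNB().fit(X, y)  # fit class-conditional Gaussians
print(model.predict([[3.9, 0.5]]))          # -> [1] (high load)
print(model.predict_proba([[3.9, 0.5]]))    # posterior over both classes
```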
Claims 22-23 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Publicover (US 20150338915 A1) in view of Rau (US 20160022193 A1).
Regarding claim 22, Publicover teaches the method of claim 1. However, Publicover fails to disclose the type of camera used.
Rau teaches systems and methods for real-time measurement of objective, autonomic physiological parameters that allow for monitoring of mental health illnesses and emotional health changes. Rau discloses wherein the first camera is an infrared camera ([0143] These systems have pairs of video [visible spectrum] and thermal infrared [IR] video camera subsystems).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Publicover to an infrared camera as disclosed in Rau to capture facial features, pupil size changes, pupil movements, eyelid flutter rates, perspiration, and facial blood flow changes (Rau [0067] and [0143]).
Regarding claim 23, Publicover teaches the method of claim 1. However, Publicover fails to disclose the type of camera used. Rau discloses wherein the first camera is a full-color camera ([0143] These systems have pairs of video [visible spectrum] and thermal infrared [IR] video camera subsystems).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Publicover to include a full-color [visible spectrum] camera as disclosed in Rau to capture facial features, pupil size changes, pupil movements, eyelid flutter rates, perspiration, and facial blood flow changes (Rau [0067] and [0143]).
Regarding claim 25, Publicover teaches the method of claim 1. However, Publicover fails to disclose the type of camera used. Rau discloses wherein the first camera is both an infrared camera and a full-color camera ([0143] These systems have pairs of video [visible spectrum] and thermal infrared [IR] video camera subsystems).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Publicover to include an infrared and a full-color camera as disclosed in Rau to capture facial features, pupil size changes, pupil movements, eyelid flutter rates, perspiration, and facial blood flow changes (Rau [0067] and [0143]).
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Publicover (US 20150338915 A1) in view of Rau (US 20160022193 A1), and further in view of Proença (H. Proença, "Iris Recognition: On the Segmentation of Degraded Images Acquired in the Visible Wavelength," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 8, pp. 1502-1516, Aug. 2010, doi: 10.1109/TPAMI.2009.140).
Regarding claim 24, the combination of Publicover/Rau discloses the method of claim 23. However, the combination of Publicover/Rau fails to disclose converting a noisy color image into a clear infrared image. Proença teaches an iris recognition segmentation method that can handle degraded images acquired in less constrained conditions. The combination of Publicover/Proença discloses wherein the first time series of eye movements recorded by the first camera (Publicover: [0020] one or more cameras or at least one sensor to monitor changes in the reflection of the light; [0221] Using eye position data determined from a series of eye images, algorithmic “filters” can be constructed to identify and distinguish, in real-time, the presence of a saccadic or smooth pursuit eye movements; [0583] The use of deep learning approaches to eye signals can be classified as a “time series” data analysis) comprises a noisy color image data, and the step of transforming, by a neural network, the noisy color image data into a clear infrared image data (Proença: pg 1502, col 2, [2], an iris segmentation technique designed specifically for degraded iris images acquired in the VW [visible wavelength] and unconstrained scenarios; Fig. 3; pg 1507, col 1, Fig. 7. Schema for the multilayered feed-forward neural networks used in both classification stages of [the] segmentation method) for the step of comparing, by the computing device, the eye movements from the first time series and the plurality of tasks (Publicover: [0477] FIG. 16 shows an overall sequence of algorithmic steps used to detect eye signals that may lead up to the performance of an action. Images of an eye are acquired and analyzed at 1630 to determine gaze pathways … Filters at 1632 are applied to all series of eye positions to determine whether primarily saccadic or smooth pursuit [or vergence or vestibulo-ocular] eye movements are present. If matches to these forms of voluntary eye movement are found, then gaze pathways are further compared with screen positions at 1633 to determine if regions corresponding to interactables or real objects are being viewed. Gaze patterns are then further analyzed to determine if they generally correspond to activation interactables displayed on the screen at 1634. If a match is found, then actions corresponding to the selected interactable[s] are performed at 1635).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Publicover/Rau to convert a noisy color image into a clear infrared image as disclosed in Proença to enable the capture of a much higher level of detail afforded by visible wavelengths despite the captured noise and artifacts (Proença, pg 1502, col 2, [2]).
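As orientation for the claim 24 transformation step, a toy convolutional mapping from a noisy color eye image to a single-channel infrared-like image is sketched below. The architecture is invented for illustration; Proença's multilayered feed-forward networks perform iris segmentation, not color-to-infrared conversion.

```python
import torch
import torch.nn as nn

class RGBToIR(nn.Module):
    """Toy network mapping a noisy 3-channel color image to a 1-channel
    infrared-like image; the architecture is illustrative only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # local denoising
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),   # collapse to 1 channel
            nn.Sigmoid(),                                 # intensities in [0, 1]
        )

    def forward(self, x):          # x: (batch, 3, H, W) noisy color frames
        return self.net(x)         # -> (batch, 1, H, W) IR-like frames

model = RGBToIR()
noisy = torch.rand(1, 3, 64, 64)   # stand-in for a noisy color eye image
print(model(noisy).shape)          # torch.Size([1, 1, 64, 64])
```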
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOLLY HALPRIN whose telephone number is (703)756-1520. The examiner can normally be reached 12PM-8PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert (Tse) Chen can be reached on (571) 272-3672. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.H./Examiner, Art Unit 3791
/DEVIN B HENSON/Primary Examiner, Art Unit 3791