DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/23/2026 has been entered.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. For the purposes of examination, the priority claims to EA/201700326 and PCT/EA2017000004 are acknowledged for Claims 1-3 & 6. The filing date of the instant application for Claims 1-3 & 6 is 07/18/2017.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim(s) 1-3 & 6 is/are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claim 1 was amended to recite, “only utilize the integrated camera…and execute programmed software instructions”. The Applicant, in the Remarks filed 02/23/2026, provided support for most of the amendments. However, the Examiner was unable to find support for “only utilize the integrated camera”. In fact, the Applicant on Page 7 of the Remarks, filed 02/23/2026, cites the Specification as describing “a telemetry monitoring system, that includes a camera and interchangeable auxiliary sensors for input, that monitors, reads and measures signals and parameters of a subject's vital signs. The invention does this by analyzing: acoustic, visual, pyrometric, spectroscopic data, skin surface temperature, sensorimotor reaction, motor activity, psychoemotional and mental state analysis, pupil and sclera discoloration as well as the composition of exhaled air evaluation to determine clinical signs of drug and alcohol intoxication” (Page 7 of the Specification as originally filed). The Examiner contends that the amendment of “only utilize the integrated camera” fails to comply with the written description requirement, as the Specification supports using more than only a camera in the analysis.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1-3 & 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Toth et al. (U.S. Patent Application 2017/0231490 A1) and further in view of Tzvieli et al. (U.S. Patent Application 2017/0367651 A1).
Claim 1: Toth teaches –
A software [method for analyzing] (Abstract) and hardware system [system including a head mounted display] (Abstract) for telemetric monitoring of a subject [to monitor one or more physiologic signals from the face or head of the subject] (Abstract) comprising of:
an electronic computing device of the subject [feedback/user device] (Figure 1a, Element 147) [a head mounted display (HMD)] (Figure 1a, Element 140) [patch/module pairs] (Figure 1a, Element 5-137);
having an integrated camera [ocular assessment cameras] [one or more back facing cameras] (Para 0257) with a lens [HMD includes a lens] (Para 0315),
having a processor [processor] (Para 0089) therein [a processor included in or coupled to the head mounted display] (Para 0061);
having an audio input [the head mounted display may include one or more audio input devices] (Para 0042) and an audio output thereon [the HMD may be configured to deliver an audio/visual presentation to the subject] (Para 0129); and
Examiner’s Note: The electronic computing device of the subject consists of the user device, the head mounted display and the patches. All of these elements are used to interact with the subject.
configured, when the lens is directed at the subject to capture the subject’s face [The field of view of the back-facing imaging sensor 605 may be adjusted to compensate for the presence of the lens 611 when interfacing with the subject] (Para 0315) and when the lens is separated a device-determined distance “d” from the subject’s face (See Figure 6a; because the user is wearing the device in a HMD the device keeps the camera at a device-determined distance from the face), to have the processor [processor] (Para 0089) of the electronic computing device of the subject [feedback/user device] (Figure 1a, Element 147) [a head mounted display (HMD)] (Figure 1a, Element 140) [patch/module pairs] (Figure 1a, Element 5-137) operably configured to:
only utilize the integrated camera [ocular assessment cameras] [one or more back facing cameras] (Para 0257) directed at the subject’s face [The HMD 140 may include one or more back facing cameras…so as to image…a skin site, a facial patch of skin, features thereof, or the like of the subject] (Para 0257) and execute programmed software instructions to [a processor coupled thereto programmed with machine readable code] (Para 0091) have a first facial analysis on the subject and a second facial analysis [the monitored baseline physiologic responses being used for comparison with future test results, previous test results, a procedural outcome, a patient population, etc] (Para 0287; wherein the baseline is the first and the comparison to the baseline is the second) on the subject with the device-determined distance “d” being the same (See Figure 6a; because the user is wearing the device in a HMD the device keeps the camera at a device-determined distance from the face), the first and second analysis (Para 0287; wherein the baseline is the first and the comparison to the baseline is the second) including
Examiner’s Note: The integrated camera is the only sensor used for the measuring of spectral changes and the pupil assessment, and the other claimed processes are performed by the camera alone, since a camera is inherently required to carry out these elements.
measure spectral changes associated with the subject [The LEDs may be configured so as to generate light fields with varying intensity, spectral content] (Para 0257), including color changes in facial skin [physiologic parameters that may be measured…skin response (e.g. …color change, etc.)] (Para 0180) and the mucous membranes tissues of the subject [color change analysis of underlying tissue] (Para 0183),
conduct a pupil assessment [capture ocular parameters, pupil diameters, iris features, iris surface properties, uvea features, uvea blood flow, iris tonal changes, uvea tonal changes, uvea color changes, facial expressions, eye movements, combination thereof, or the like from the subject during use] (Para 0091),
measure sensorimotor reaction of the subject and motor activity of the subject [facial expressions that may be monitored include all facial expressions, monitoring mouth, eye, neck, and jaw muscles, smiling, frowning, voluntary and involuntary muscle contractions, tissue positioning and gestural activity, twitching, blinking, eye movement, saccade, asymmetrical movements, patterned head movement, rotating or nodding, head positioning relative to the body, vocal cord tension and resulting tonality, vocal volume (decibels), and speed of speech] (Para 0096), and
measure blood perfusion of the subject’s face [configured to fix the HMD to the face of the subject, etc…to monitor a real-time blood perfusion parameter in a tissue of the subject…to monitor a local blood perfusion parameter in the tissue of the subject] (Para 0193); and
process data received from the measurements obtained from a comparison of the first and second facial analysis [the monitored baseline physiologic responses being used for comparison with future test results, previous test results, a procedural outcome, a patient population, etc] (Para 0287; wherein the baseline is the first and the comparison to the baseline is the second) of the subject [the HMD or a processor coupled thereto programmed with machine readable code and/or imaging algorithms configured so as to analyze inputs from the plurality of cameras and assemble an array of subject responses, physiologic metrics, or the like during use] (Para 0091) and to generate recommendations [configured to coordinate information exchange to/from each module and/or patch, and to generate one or more physiologic signals,… alerts, reports, recommendation signals, commands, combinations thereof, or the like for the subject, a user, a network, an EHR, a database…or the like] (Para 0089) for a medical practitioner to consider via a wireless network [configured for wireless communication] (Para 0256) with an electronic computing device of a medical practitioner [host device] (Figure 1a, Element 145) communicatively coupled to the electronic computing device of the subject [feedback/user device] (Figure 1a, Element 147) [a head mounted display (HMD)] (Figure 1a, Element 140) [patch/module pairs] (Figure 1a, Element 5-137).
Toth fails to teach clustering a subject’s face into quadrants. However, Tzvieli teaches –
clustering the subject’s face into quadrants [measurements of a fourth ROI (THROI4), where ROI4 covers a portion of the left side of the user's upper lip] (Para 0119 & 0101) in order to use a machine learning-based model to detect a physiological response based on similarity in measurements between the regions (Para 0100-0101)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the electronic computing device of Toth to include the clustering as taught by Tzvieli in order to use a machine learning-based model to detect a physiological response based on similarity in measurements between the regions (Para 0100-0101).
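For illustration only (this sketch is not part of the record), the quadrant clustering and region-similarity comparison attributed to Tzvieli above might be approximated as follows. The function names, the even four-way split, and the similarity heuristic are hypothetical stand-ins; Tzvieli's actual approach applies a trained machine-learning model to thermal-camera region-of-interest measurements.

```python
import numpy as np

def cluster_face_quadrants(face_region: np.ndarray) -> dict:
    """Split a cropped face image (H x W) into four quadrants and
    return a per-quadrant mean intensity, a simple stand-in for the
    region-of-interest (ROI) measurements discussed above."""
    h, w = face_region.shape[:2]
    quadrants = {
        "upper_left": face_region[: h // 2, : w // 2],
        "upper_right": face_region[: h // 2, w // 2:],
        "lower_left": face_region[h // 2:, : w // 2],
        "lower_right": face_region[h // 2:, w // 2:],
    }
    return {name: float(q.mean()) for name, q in quadrants.items()}

def region_similarity(features: dict) -> float:
    """Crude similarity score: inverse of the spread between quadrant
    means. A real system would feed these per-region features into a
    trained machine-learning model rather than use this heuristic."""
    values = np.array(list(features.values()))
    return float(1.0 / (1.0 + values.std()))
```

A uniform face crop yields four identical quadrant means and a similarity score of 1.0; asymmetric heating of one quadrant lowers the score.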
Claim 2/1: Toth teaches wherein the computing device is operably configured to receive video stream [The visual input device 505a,b, the back facing imaging sensors 507a,b,…are coupled to one or more processors] (Para 0297) [The processors 511a,b may coordinate receipt of one or more images, video streams, etc. from the front-facing imaging sensors 509a,b, the back facing imaging sensors 507a,b,] (Para 0297) from the integrated camera [ocular assessment cameras] [one or more back facing cameras] (Para 0257) to measure the spectral changes associated with the subject [The LEDs may be configured so as to generate light fields with varying intensity, spectral content] (Para 0257) [physiologic parameters that may be measured…skin response (e.g. …color change, etc.)] (Para 0180) [color change analysis of underlying tissue] (Para 0183).
Claim 3/2/1: Toth teaches wherein the computing device of the subject further comprises a graphic information input device [feedback/user device] (Figure 1a, Element 147) that is operably configured to measure the spectral changes associated with the subject [The LEDs may be configured so as to generate light fields with varying intensity, spectral content] (Para 0257) [physiologic parameters that may be measured…skin response (e.g. …color change, etc.)] (Para 0180) [color change analysis of underlying tissue] (Para 0183).
Toth fails to teach to split the video stream into separate frames and filter pixels in each frame. However, Tzvieli teaches –
to split the video stream into separate frames and filter pixels in each frame [the values measured…may undergo various forms of filtering and/or normalization] (Para 0258) [some feature values may be average values of certain sensors (pixels) of the first or second thermal cameras during the certain period] (Para 0258) [feature values…taken during the certain period may include time series data] (Para 0258) [feature values may be values measured at certain times during the certain period by the certain sensors] (Para 0258) in order to reduce outliers through filtering and normalizing the data, thereby reducing noise in the data (Para 0256 & 0258).
Examiner’s Note: Although the prior art does not specifically state splitting into frames, it is understood that this occurs in order for the disclosed pixel value filtering to take place as described. The prior art teaches filtering by only taking values at certain times during certain periods from certain sensors (pixels). The disclosure of time series data means that the pixels are analyzed for their values in the individual frames and put into a time series.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computing device of Toth to include the filtering as taught by Tzvieli in order to reduce outliers through filtering and normalizing the data, thereby reducing noise in the data (Para 0256 & 0258).
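For illustration only (not part of the record), the frame-splitting and per-pixel time-series filtering read onto Tzvieli above might be sketched as follows. The function names, the reshape into per-pixel series, and the z-score outlier rule are hypothetical stand-ins for the unspecified "filtering and/or normalization" in the reference.

```python
import numpy as np

def frames_to_pixel_series(video: np.ndarray) -> np.ndarray:
    """Treat a video as a stack of frames (T x H x W) and rearrange it
    into one time series per pixel (H*W rows of T samples), mirroring
    the 'time series data' reading of Tzvieli discussed above."""
    t, h, w = video.shape
    return video.reshape(t, h * w).T

def filter_outliers(series: np.ndarray, z_thresh: float = 2.5) -> np.ndarray:
    """Replace outlier samples (beyond z_thresh standard deviations
    from the per-pixel mean) with that pixel's mean value, a simple
    stand-in for the filtering/normalization Tzvieli describes."""
    mean = series.mean(axis=1, keepdims=True)
    std = series.std(axis=1, keepdims=True) + 1e-9  # avoid divide-by-zero
    z = np.abs((series - mean) / std)
    return np.where(z > z_thresh, mean, series)
```

Each row of the output of frames_to_pixel_series is one pixel's value across all frames, so the filtering step operates on values "at certain times during the certain period by the certain sensors (pixels)."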
Claim 6/5/4/3/2/1: Toth teaches wherein at least one sensor [HMD] is operably configured to measure psychoemotional and mental state of the subject [the HMD 500 may be configured to provide one or more aspects of a visual presentation to the subject as part of a stress test in accordance with the present disclosure. The HMD 500 may be configured to monitor one or more physiologic parameters, facial parameters, ocular parameters, etc. in accordance with the present disclosure during use. Such information may be useful for determining the response of the subject to a stress test, a procedure, for patient selection purposes, as part of a gaming experience, to determine an emotional response of the subject to a suggestion, or the like] (Para 0299).
Response to Arguments
Applicant's arguments filed 03/23/2026 have been fully considered but they are not persuasive. The Applicant contends that Toth is directed to a HMD with sensors/patches that are spread over the user’s body. The Applicant further contends that Toth fails to teach or suggest a system whereby only a single camera obtains the claimed data/measurements. The Examiner respectfully disagrees. Toth uses a camera to obtain the claimed data/measurements as recited above, similar to the invention of the Applicant. Additionally, the claim limitation of “only utilize the integrated camera” is not fully supported by the written description, as the Applicant's invention also uses auxiliary sensors. The rejection above has been amended to point out how Toth reads on the new claim limitations as amended. The arguments are unconvincing.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Tadi et al. (U.S. Patent 10,521,014 B2) – Tadi teaches systems, methods and apparatuses for detecting muscle activity, and in particular, to systems, methods and apparatuses for detecting facial expression according to muscle activity, including for a virtual or augmented reality (AR/VR) system, as well as such a system using simultaneous localization and mapping (SLAM).
Aratow et al. (U.S. Patent Application 2019/0385711 A1) – Aratow teaches a method that involves presenting or formulating a query based on multiple target behavioral health states of a subject, where the query is configured to elicit a response from the subject. The query is transmitted in an audio, visual, or textual format to the subject to elicit the response. The data comprising the response from the subject, including speech data, is received in response to transmitting the query. The data is processed using multiple models to yield processed data, where the models comprise a natural language processing (NLP) model (2214), an acoustic model, and a visual model. The processed data is utilized to generate analytics data of the behavioral health state associated with the subject.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HELENE C BOR whose telephone number is (571)272-2947. The examiner can normally be reached Mon - Fri 10:30 - 6:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Koharski can be reached at (571) 272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Helene Bor/Examiner, Art Unit 3797
/CHRISTOPHER KOHARSKI/Supervisory Patent Examiner, Art Unit 3797