DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Tegreene, U.S. Publication No. 2013/0139259, in view of Yu et al., “Remote Photoplethysmograph Signal Measurement from Facial Videos Using Spatio-Temporal Networks” (published at https://arxiv.org/abs/1905.02419, July 2019).
Regarding claim 9, Tegreene teaches a system for detecting deception of a subject from a media stream (see Tegreene Figure 1), the system comprising:
a processor; and a memory storing one or more programs for execution by the processor, the one or more programs including instructions (see paragraph [0217]) for:
capturing a media stream of the subject, the media stream including a sequence of frames (see Figure 1, physiological data capture device 112 and communications content capture device 101 and paragraphs [0027] and [0031]);
processing each frame of the media stream to track a plurality of biometrics (see Figure 5, biometrics 502-508 and Figure 6, biometrics 602-608 and paragraphs [0175] and [0176]); and
determining whether the subject in the media stream is deceptive based upon changes to respective biometrics (see Figures 5 and 6, step 408 and paragraph [0178]).
Tegreene does not expressly teach utilizing a 3-dimensional convolutional neural network (3DCNN) to perform remote photoplethysmography (rPPG) extraction to determine at least a subset of the plurality of biometrics.
However, Yu, in a similar invention in the same field of endeavor, teaches a system for capturing a media stream of a subject, the media stream including a sequence of frames (see Yu Figure 2a, images 1 to T and page 4, “The overall architecture of PhysNet is shown in Figure 2. The input of the network is T-frame face images with RGB channels”), and processing each frame of the media stream to track a subset of the biometrics (see Abstract) as taught in Tegreene (see Tegreene Figure 6, step 602), further configured for
utilizing a 3-dimensional convolutional neural network (3DCNN) to perform remote photoplethysmography (rPPG) extraction to determine the subset of the biometrics (see Yu, caption for Figure 2 and Abstract).
One of ordinary skill in the art before the effective filing date of the invention would have found it obvious to combine the teaching of using a 3DCNN to perform rPPG for tracking a biometric as taught in Yu with the system taught in Tegreene, the motivation being to achieve finer and therefore more accurate biometric measurements (see Yu section 2.2).
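For illustration only (not part of the record or of any applied reference): the spatio-temporal 3D convolution at the core of a 3DCNN such as the PhysNet architecture described in Yu can be sketched as a filter slid over a stack of face frames, so that frame-to-frame intensity changes (the rPPG cue) appear in the output. The function name, the toy video, and the temporal-difference kernel below are all hypothetical simplifications, not Yu's actual network.

```python
import numpy as np

def conv3d_valid(video, kernel):
    """Naive 'valid'-mode 3D convolution over a (T, H, W) frame stack --
    the spatio-temporal operation underlying a 3DCNN rPPG extractor."""
    T, H, W = video.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Correlate the kernel with the local space-time patch.
                out[i, j, k] = np.sum(video[i:i + t, j:j + h, k:k + w] * kernel)
    return out

# Toy input: T=8 grayscale 16x16 "face" frames whose mean intensity
# oscillates like a pulse signal (the cue an rPPG network learns to extract).
T, H, W = 8, 16, 16
pulse = 0.5 + 0.5 * np.sin(2 * np.pi * np.arange(T) / T)
video = np.ones((T, H, W)) * pulse[:, None, None]

# A hand-built temporal-difference kernel: averages a 3x3 spatial patch in
# two consecutive frames and subtracts, responding to intensity change.
kernel = np.zeros((2, 3, 3))
kernel[0] = -1.0 / 9
kernel[1] = 1.0 / 9

response = conv3d_valid(video, kernel)
# Each output frame's mean tracks the change in the simulated pulse signal.
```

In a trained network such kernels are learned rather than hand-built, and many channels are stacked with nonlinearities, but the shape bookkeeping (an input of T frames yielding a temporally aligned output signal) is the same.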
Method claim 1 recites similar limitations as claim 9, and is rejected under similar rationale.
Regarding claim 10, Tegreene in view of Yu teaches all the limitations of claim 9, and further teaches wherein the media stream includes one or more of a visible-light video stream, a near-infrared video stream, a longwave-infrared video stream, a thermal video stream, and an audio stream of the subject (see Tegreene paragraphs [0027] and [0031]).
Method claim 2 recites similar limitations as claim 10, and is rejected under similar rationale.
Regarding claim 11, Tegreene in view of Yu teaches all the limitations of claim 9, and further teaches wherein the plurality of biometrics includes two or more of pulse rate, eye gaze (see Tegreene Figure 5, step 506 and paragraph [0184]), eye blink rate, pupil diameter (see Tegreene Figure 5, step 504 and paragraph [0181]), face temperature, speech, and micro-expressions (see Tegreene Figure 5, step 502 and paragraph [0180]).
Method claim 3 recites similar limitations as claim 11, and is rejected under similar rationale.
Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Tegreene, U.S. Publication No. 2013/0139259, in view of Yu et al., “Remote Photoplethysmograph Signal Measurement from Facial Videos Using Spatio-Temporal Networks” (published at https://arxiv.org/abs/1905.02419, July 2019), and Cuestas Rodriguez, U.S. Publication No. 2020/0383621.
Regarding claim 12, Tegreene in view of Yu teaches all the limitations of claim 9, and further teaches wherein the plurality of biometrics includes heart rate (see Tegreene Figure 6, step 604 wherein “heart rate” is used in the art synonymously with “pulse rate”).
Tegreene in view of Yu does not expressly teach wherein the plurality of biometrics includes pupil diameter and face temperature.
However, Cuestas Rodriguez, in a similar invention in the same field of endeavor, teaches a system for detecting deception of a subject from a media stream comprising a processor (see Cuestas Rodriguez Figure 3, processing module 170) for: capturing a media stream of the subject, the media stream including a sequence of frames; processing each frame of the media stream to track a plurality of biometrics; and determining whether the subject in the media stream is deceptive based upon changes to respective biometrics, wherein the plurality of biometrics includes heart rate (see Figure 3, modules 110-150 and paragraph [0045]), as taught in Tegreene in view of Yu,
wherein the plurality of biometrics includes pupil diameter and face temperature (see paragraph [0045]).
One of ordinary skill in the art before the effective filing date of the invention would have found it obvious as a matter of simple substitution to replace the biometrics taught in Tegreene in view of Yu with those taught in Cuestas Rodriguez, to yield the predictable results of successfully determining deception in a person.
Method claim 4 recites similar limitations as claim 12, and is rejected under similar rationale.
Claims 5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Tegreene, U.S. Publication No. 2013/0139259, in view of Yu et al., “Remote Photoplethysmograph Signal Measurement from Facial Videos Using Spatio-Temporal Networks” (published at https://arxiv.org/abs/1905.02419, July 2019), and Pavlidis, U.S. Publication No. 2003/0012253.
Regarding claim 13, Tegreene in view of Yu teaches all the limitations of claim 9, but does not expressly teach cropping each frame of the media stream to encapsulate a region of interest that includes one or more of a face, cheek, forehead, or an eye.
However, Pavlidis, in a similar invention in the same field of endeavor, teaches a system for detecting deception of a subject from a media stream (see Pavlidis Figure 1), the system comprising: a processor; and a memory storing one or more programs for execution by the processor, the one or more programs including instructions (see Figure 1, computing apparatus 14 and paragraph [0051]) for: capturing a media stream of the subject, the media stream including a sequence of frames (see Figure 1, thermal camera 12 and paragraph [0060]); processing each frame of the media stream to track a biometric (see Figure 4, steps 52 and 54); and determining whether the subject in the media stream is deceptive based upon changes to the biometric (see Figure 4, step 56 and paragraph [0063]), as taught in Tegreene in view of Yu, and further teaches
cropping each frame of the media stream to encapsulate a region of interest that includes one or more of a face, cheek, forehead, or an eye (see paragraph [0120]).
One of ordinary skill in the art before the effective filing date of the invention would have found it obvious to combine the teaching of cropping each frame to include a region of interest as taught in Pavlidis with the system taught in Tegreene in view of Yu, the motivation being to eliminate background that would otherwise skew the results.
Method claim 5 recites similar limitations as claim 13, and is rejected under similar rationale.
Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Tegreene, U.S. Publication No. 2013/0139259, in view of Yu et al., “Remote Photoplethysmograph Signal Measurement from Facial Videos Using Spatio-Temporal Networks” (published at https://arxiv.org/abs/1905.02419, July 2019); Pavlidis, U.S. Publication No. 2003/0012253; and Farag et al., U.S. Publication No. 2008/0045847.
Regarding claim 14, Tegreene in view of Yu and Pavlidis teaches all the limitations of claim 13, but does not expressly teach wherein the region of interest includes two or more body parts.
However, Farag, in a similar invention in the same field of endeavor, teaches a system for detecting deception of a subject from a media stream (see Farag Figure 12 and Abstract), the system comprising: a processor; and a memory storing one or more programs for execution by the processor, the one or more programs including instructions (see paragraph [0035]) for: capturing a media stream of the subject, the media stream including a sequence of frames (see Figure 12, thermal IR camera and paragraph [0063]); processing each frame of the media stream to track a biometric; and determining whether the subject in the media stream is deceptive based upon changes to the biometric (see paragraph [0040]); and tracking a region of interest including a face (see Figure 12, step s5 and paragraph [0020]), as taught in Tegreene in view of Yu and Pavlidis, wherein
the region of interest includes two or more body parts (see Figure 7).
One of ordinary skill in the art before the effective filing date of the invention would have found it obvious to combine the teaching of including two body parts (i.e., head and neck) for calculating a biometric as taught in Farag with the system cropping a region of interest for measuring biometrics as taught in Tegreene in view of Yu and Pavlidis, the motivation being to provide redundant measurements that improve the accuracy of the biometric measurement.
Method claim 6 recites similar limitations as claim 14, and is rejected under similar rationale.
Claims 7, 8, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Tegreene, U.S. Publication No. 2013/0139259, in view of Yu et al., “Remote Photoplethysmograph Signal Measurement from Facial Videos Using Spatio-Temporal Networks” (published at https://arxiv.org/abs/1905.02419, July 2019), and Sebe et al., U.S. Publication No. 2017/0367590.
Regarding claim 15, Tegreene in view of Yu teaches all the limitations of claim 9, and further teaches using at least two of a visible-light video stream (see Tegreene paragraph [0027]), a near-infrared video stream (see Tegreene paragraph [0034]), and a thermal video stream.
Tegreene in view of Yu does not expressly teach combining at least two of a visible-light video stream, a near-infrared video stream, and a thermal video stream into a fused video stream.
However, Sebe, in a similar invention in the same field of endeavor, teaches a system for detecting deception of a subject from a media stream (see Sebe paragraph [0136]), the system comprising: a processor; and a memory storing one or more programs for execution by the processor, the one or more programs including instructions (see paragraph [0139]) for: capturing a media stream of the subject, the media stream including a sequence of frames; and processing each frame of the media stream to track a plurality of biometrics (see paragraph [0105]), as taught in Tegreene in view of Yu, further configured for
combining at least two of a visible-light video stream, a near-infrared video stream, and a thermal video stream into a fused video stream (see paragraph [0105]).
One of ordinary skill in the art before the effective filing date of the invention would have found it obvious to combine the teaching of fusing streams as taught in Sebe with the system taught in Tegreene in view of Yu, the motivation being to increase the accuracy of the combined deception detection biometrics by ensuring there is no delay between the streams that would skew the results.
Method claim 7 recites similar limitations as claim 15, and is rejected under similar rationale.
Regarding claim 16, Tegreene in view of Yu and Sebe teaches all the limitations of claim 15, and further teaches wherein the visible-light video stream, the near-infrared video stream, and/or the thermal video stream are combined according to a synchronization device (see Sebe paragraph [0105]).
Method claim 8 recites similar limitations as claim 16, and is rejected under similar rationale.
Response to Arguments
Applicant’s arguments with respect to claims 1 and 9 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CASEY L KRETZER whose telephone number is (571)272-5639. The examiner can normally be reached M-F 10:00 AM-7:00 PM Pacific Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Payne can be reached at (571)272-3024. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CASEY L KRETZER/Primary Examiner, Art Unit 2635