Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claims 1-15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 5, 10 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Feng (Pub 20170091550) in view of Smits (Pub 20230074718) in view of Bazakos (Pub 20100239119).
Regarding claims 1 and 10, Feng discloses:
a light source configured to emit IR or near IR light (Para. [0033], LED flash 155 can include an NIR diode)
and an RGB-IR sensor having Red, Blue, Green and Infrared dyes arranged in a modified Bayer pattern (Para. [0032], the camera may include an RGB-IR sensor), the method comprising,
in a first time frame: acquiring a first image of a target area using said RGB-IR sensor during illumination by the light source, said first image including a set of IR pixels containing information from said IR dyes, and a set of RGB pixels containing information from said red, blue and green dyes, respectively (see capturing NIR image data of the iris using RGB-IR sensor 425, Fig. 4A, after NIR flash LED illumination 415, Fig. 4A)
and cropping for selecting an eye region in said first image based on the head tracking features (Para. [0111]), and performing eye tracking using said spatially enhanced IR image (see generating a fused NIR image to generate high quality iris image frame 234, and iris tracking using the learning data repository, Para. [0039]).
However, performing demosaicing of the eye region using IR pixels and RGB pixels in the eye region, to form a spatially enhanced IR image of said eye region is not disclosed.
In a similar field of endeavor, Smits discloses performing demosaicing of the eye region using IR pixels and RGB pixels in the eye region, to form a spatially enhanced IR image of said eye region ([0050], see image processor 200 configured to generate an IR image by processing color and IR pixels; the infrared image may be generated by demosaicing color and IR pixels as illustrated with respect to steps S3-S4, Fig. 5A, and Para. [0052], [0088], [0090]-[0104]; the demosaicing may be related to eye pupil tracking, Para. [0007]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Feng by Smits for the benefit of leveraging the filtering provided by Smits, eliminating noise in the generated infrared images and thereby improving their quality.
Examiner notes Feng does not explicitly disclose IR, red, blue, and green dyes. However, examiner takes Official Notice that the use of dyes is well known and conventional in the art. One of ordinary skill in the art would include such dyes in order to collect light information and generate images according to design choice, such as generating IR and color images; see, for example, Price (Pub 20180176487, Para. [0020]). It would have been obvious to one of ordinary skill in the art to modify Feng by including dyes, since using dyes with infrared illumination generates better quality images.
Further, identifying head tracking features and performing head tracking using said first image, identifying an eye region in said first image based on the head tracking features, and performing eye tracking using said spatially enhanced IR image are not disclosed.
In a similar field of endeavor, Bazakos discloses identifying head tracking features and performing head tracking using said first image (head/face tracking based on distance and determining eye location, Para. [0012])
and identifying an eye region in said first image based on the head tracking features (locating eyes in tracked/detected faces, Para. [0012]; tracking iris location, Para. [0014]; and features, Para. [0016]). One of ordinary skill in the art would have incorporated the tracking features of Bazakos for use by the iris recognition of Feng for determining eye regions when using a monitoring camera (Para. [0002] of Feng). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Feng by Bazakos for the benefit of determining eye regions.
Regarding claims 5 and 12, Feng discloses wherein the red, green and blue dyes are configured to also detect infrared light (Para. [0065]; Examiner notes it is conventional to use dyes to form Bayer filter patterns).
Claim(s) 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Feng (Pub 20170091550) in view of Smits (Pub 20230074718) in view of Bazakos (Pub 20100239119) in view of Nair (Pub 20190052855).
Regarding claims 2 and 11, the combination discloses demosaicing an image per claim 1. However, a convolutional network is not disclosed.
In a similar field of endeavor, Nair discloses wherein the step of performing demosaicing includes applying a convolutional network to said RGB-IR image frame, said convolutional network being trained to predict an enhanced IR image given an RGB-IR image (see demosaicing, i.e., extracting illuminant features of pixels 106, Fig. 1, and passing the illuminant features through a CNN 108, Fig. 1, to generate smoothed grid samples of NIR raw images in step 104, Fig. 1). One of ordinary skill in the art would have included the convolutional network of Nair to demosaic the images of Feng so that pixel samples are smoothed before high frequency components are filtered; see Para. [0046] of Nair and Para. [0154]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the combination by incorporating Nair for the benefit of generating smooth high resolution infrared images.
Claim(s) 3 is rejected under 35 U.S.C. 103 as being unpatentable over Feng (Pub 20170091550) in view of Smits (Pub 20230074718) in view of Bazakos (Pub 20100239119) in view of Nair (Pub 20190052855) in view of Chen (Pub 20200342360).
Regarding claim 3, the combination discloses a convolution layer per claim 2. However, a plurality of layers and kernels is not disclosed.
In a similar field of endeavor, Chen discloses wherein the convolutional network has a plurality of layers, preferably more than three layers, and a minimum 3×3 kernel (Para. [0056], for image recognition). One of ordinary skill in the art would have included the multiple layers and kernels of Chen in the convolutional network of Nair so that image features can be tracked when demosaicing image regions such as disclosed by Nair. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the combination by incorporating Chen for the benefit of using a convolutional network with different layers and kernels for tracking image features, thereby enabling recognition of image features.
Claim(s) 4 is rejected under 35 U.S.C. 103 as being unpatentable over Feng (Pub 20170091550) in view of Smits (Pub 20230074718) in view of Bazakos (Pub 20100239119) in view of Nair (Pub 20190052855) in view of Zhang (U.S. 11194997).
Regarding claim 4, the combination discloses claim 2. However, image training using an actual IR image is not disclosed.
In a similar field of endeavor, Zhang discloses wherein the convolutional network has been trained using an actual IR image (col. 9, lines 25-28). One of ordinary skill in the art would have included the IR image training of Zhang in the convolutional network of Nair so that an output image can be obtained based on the trained features. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the convolutional network of the combination by training it using IR images so that features of the IR images can be output.
Claim(s) 6 is rejected under 35 U.S.C. 103 as being unpatentable over Feng (Pub 20170091550) in view of Smits (Pub 20230074718) in view of Bazakos (Pub 20100239119) in view of Ren (Pub 20220114834).
Regarding claim 6, the combination discloses claim 1, including iris tracking using an IR image. However, performing head tracking using IR pixels is not explicitly disclosed.
In a similar field of endeavor, Ren discloses wherein only said IR pixels are used for identifying head tracking features and performing head tracking (Para. [0070]). One of ordinary skill in the art would have incorporated the head tracking using IR pixels of Ren in the monitoring camera of Feng (Para. [0011]) so that iris recognition can be performed. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the combination by Ren for the benefit of tracking head features, thereby identifying iris regions.
Claim(s) 7 is rejected under 35 U.S.C. 103 as being unpatentable over Feng (Pub 20170091550) in view of Smits (Pub 20230074718) in view of Bazakos (Pub 20100239119) in view of Price (Pub 20180197275).
Regarding claim 7, the combination discloses claim 1 including iris tracking.
However, using full resolution images comprising IR and RGB pixels for identifying/tracking head features is not disclosed.
In a similar field of endeavor, Price discloses wherein said IR pixels and said RGB pixels are used to provide a full resolution image, said full resolution image used for identifying head tracking features and performing head tracking (Para. [0038]-[0039] and Para. [0019]). One of ordinary skill in the art would have included the head tracking of Price, including the IR-RGB pixels, in the camera monitoring of Feng, which identifies pixels for iris recognition using RGB-IR pixels. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the combination by incorporating Price for the benefit of tracking head features before iris recognition so that iris regions can be identified.
Claim(s) 8 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Feng (Pub 20170091550) in view of Smits (Pub 20230074718) in view of Bazakos (Pub 20100239119) in view of Thavalengal (Pub 20170124394).
Regarding claims 8 and 13, the combination discloses claim 1. However, eyelid detection is not disclosed.
In a similar field of endeavor, Thavalengal discloses further comprising performing eye lid detection using said spatially enhanced IR image (see IR image is spatially enhanced at 208, Fig. 2, and Para. [0067], where the multispectral image may be used for liveness detection including eyelid movements, Para. [0042]). One of ordinary skill in the art would have incorporated the eyelid detection of Thavalengal in the camera monitoring of Feng or the eye tracking of Bazakos for ascertaining liveness of captured eye/iris regions. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the combination by incorporating Thavalengal for the benefit of identifying iris regions.
Claim(s) 9 is rejected under 35 U.S.C. 103 as being unpatentable over Feng (Pub 20170091550) in view of Smits (Pub 20230074718) in view of Bazakos (Pub 20100239119) in view of Hoyos (Pub 20160117544).
Regarding claim 9, Feng discloses using an RGB-IR sensor and demosaicing to form a spatially enhanced image of the eye regions (see Fig. 2, either A11 or A12, and demosaicing including R, B, and G/IR pixels in Para. [0154]; also note enhancing eye regions by white balancing, Para. [0150], [0153], S19, and Para. [0194]-[0195], and steps S13-S19 and S22, Fig. 9).
However, identifying head tracking features and performing head tracking using said first image, identifying an eye region in said first image based on the head tracking features, and performing eye tracking using said spatially enhanced IR image are not disclosed.
In a similar field of endeavor, Bazakos discloses identifying head tracking features and performing head tracking using said first image (head/face tracking based on distance and determining eye location, Para. [0012])
and identifying an eye region in said first image based on the head tracking features (locating eyes in tracked/detected faces, Para. [0012]; tracking iris location, Para. [0014]; and features, Para. [0016]). One of ordinary skill in the art would have incorporated the tracking features of Bazakos for use by the iris recognition of Feng for determining eye regions when using a monitoring camera (Para. [0002] of Feng). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Feng by Bazakos for the benefit of determining eye regions.
However, capturing a second image in a second time frame without illumination, identifying an eye region, and performing eye lid detection are not disclosed by the combination.
In a similar field of endeavor, Hoyos discloses acquiring, in a second time frame, a second image of the target area using said RGB-IR sensor without illumination by said light source, and performing eye lid detection using said second spatially enhanced image (lid detection 810; repeating image capturing 945 into 915 without illumination to identify eye regions 925, Fig. 9A). One of ordinary skill in the art would have included the second capturing of images without illumination for eyelid detection, as disclosed by Hoyos, into the combination. For example, Feng would have benefited from Hoyos for identifying eye regions by comparing images captured under ambient light with those captured using IR illumination. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the combination by Hoyos for the benefit of selecting images having eye regions of adequate brightness.
Claim(s) 14 is rejected under 35 U.S.C. 103 as being unpatentable over Feng (Pub 20170091550) in view of Smits (Pub 20230074718) in view of Bazakos (Pub 20100239119) in view of Thavalengal (Pub 20170124394) in view of Hoyos (Pub 20160117544).
Regarding claim 14, Feng discloses using an RGB-IR sensor and demosaicing to form a spatially enhanced image of the eye regions (see Fig. 2, either A11 or A12, and demosaicing including R, B, and G/IR pixels in Para. [0154]; also note enhancing eye regions by white balancing, Para. [0150], [0153], S19, and Para. [0194]-[0195], and steps S13-S19 and S22, Fig. 9).
However, identifying head tracking features and performing head tracking using said first image, identifying an eye region in said first image based on the head tracking features, and performing eye tracking using said spatially enhanced IR image are not disclosed.
In a similar field of endeavor, Bazakos discloses identifying head tracking features and performing head tracking using said first image (head/face tracking based on distance and determining eye location, Para. [0012])
and identifying an eye region in said first image based on the head tracking features (locating eyes in tracked/detected faces, Para. [0012]; tracking iris location, Para. [0014]; and features, Para. [0016]). One of ordinary skill in the art would have incorporated the tracking features of Bazakos for use by the iris recognition of Feng for determining eye regions when using a monitoring camera (Para. [0002] of Feng). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Feng by Bazakos for the benefit of determining eye regions.
However, capturing a second image in a second time frame without illumination, identifying an eye region, and performing eye lid detection are not disclosed by the combination.
In a similar field of endeavor, Hoyos discloses acquiring, in a second time frame, a second image of the target area using said RGB-IR sensor without illumination by said light source, and performing eye lid detection using said second spatially enhanced image (lid detection 810; repeating image capturing 945 into 915 without illumination to identify eye regions 925, Fig. 9A). One of ordinary skill in the art would have included the second capturing of images without illumination for eyelid detection, as disclosed by Hoyos, into the combination. For example, Feng would have benefited from Hoyos for identifying eye regions by comparing images captured under ambient light with those captured using IR illumination. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the combination by Hoyos for the benefit of selecting images having eye regions of adequate brightness.
Claim(s) 15 is rejected under 35 U.S.C. 103 as being unpatentable over Feng (Pub 20170091550) in view of Smits (Pub 20230074718) in view of Bazakos (Pub 20100239119) in view of Linden (Pub 20240187738).
Regarding claim 15, the combination discloses claim 10. However, global/rolling shutters are not disclosed.
In a similar field of endeavor, Linden discloses the use of a rolling shutter or global shutter (Para. [0015]). One of ordinary skill in the art would include a rolling and/or global shutter in iris identification/tracking such as disclosed by the combination so that proper illumination of the iris/eyes can be obtained. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the combination by Linden for the benefit of generating clear images, thus enabling tracking of iris/eye regions.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUMAM M SATTI whose telephone number is (571)270-1709. The examiner can normally be reached Mon-Fri.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Miller can be reached at (571)272-7353. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
HUMAM M. SATTI
Examiner
Art Unit 2422
/JOHN W MILLER/Supervisory Patent Examiner, Art Unit 2422