DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgement is made of Applicant’s claim of priority to U.S. Provisional Application No. 63/524,500, filed June 30, 2023.
Drawings
The drawings (5 pages) have been considered and are placed on record in the file.
Status of Claims
Claims 1-20 are pending.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
a. Determining the scope and contents of the prior art.
b. Ascertaining the differences between the prior art and the claims at issue.
c. Resolving the level of ordinary skill in the pertinent art.
d. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 8-10, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (“Noise-Robust Pupil Center Detection Through CNN-Based Segmentation With Shape-Prior Loss”) in view of Gebauer et al. (US 2020/0348755).
Consider Claim 1 (and similarly Claims 8 and 15), Han discloses “A computer-implemented method, comprising: obtaining a set of labeled pupil images, wherein a respective labeled pupil image comprises a pupil-segmentation label and a pupil-center-position label” (Han, Figures 2 and 3 and Page 64740, right column, 4th paragraph, wherein the labeling of the segmentation map of the pupil area and of the pupil center position is disclosed; in addition, Page 3, left column, item “3)”, wherein it is disclosed: “We make a new dataset for the pupil segmentation and also add the annotations to the existing IR eye image datasets”);
“constructing a multitask machine learning model that comprises a first branch for performing a pupil-region segmentation task” (Han, Page 64740, right column, last paragraph);
“training the multitask machine learning model using the set of labeled pupil images” (Han, Page 64742, right column, III. Proposed Method, wherein it is disclosed: “we perform the segmentation using the sigmoid output obtained from the network, i.e., we threshold the magnitude of the final feature map. At the third stage, we perform the connected component (CC) analysis, which is to connect the same-labeled neighboring pixels into a blob, and find the largest blob O, which is considered the pupil area. At the last stage, we find the center of the largest blob as a pupil center”); and
“wherein training the multitask machine learning model comprises simultaneously training the first and second branches” (Han, Page 64743, left column, A. Network Architecture, wherein it is disclosed that a U-Net is used for pupil segmentation, which can simultaneously perform the classification and localization). Although Han discloses obtaining the center of the pupil using existing CNN-based pupil detection (Han, Page 64740, right column, last paragraph), Han does not explicitly recite “a second branch for performing a pupil-center-position regression task”. However, in an analogous field of endeavor, Gebauer discloses “the first neural network is configured to: solve a first regression problem to identify the initial pupil center” (Gebauer, Claim 17).
Accordingly, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Han with the teachings of Gebauer to perform a pupil-center-position regression task to find the center of the pupil in the image of an eye. One of ordinary skill in the art could have substituted the regression algorithm taught by Gebauer for the calculation method of Han, and the results would have been predictable, as disclosed in the Han Abstract (“The conventional deep learning-based method for this problem is to train a convolutional neural network (CNN), which takes the eye image as the input and gives the pupil center as a regression result”). Therefore, it would have been obvious to combine Han and Gebauer to obtain the invention of Claim 1. In addition, with respect to the additional elements recited in independent Claims 8 and 15, Gebauer discloses one or more processors and a computer-readable storage medium for its device (Gebauer, Paragraph [0003]).
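For illustration only, and not as part of the record: the four-stage post-processing quoted from Han above (threshold the sigmoid output, run connected-component analysis to group same-labeled neighboring pixels into blobs, take the largest blob as the pupil area, and return its center) can be sketched as follows. The function name, the 0.5 threshold, 4-connectivity, and the centroid as the blob "center" are assumptions of this sketch, not details taken from Han or Gebauer.

```python
from collections import deque

def pupil_center(prob_map, thresh=0.5):
    """Sketch of the quoted four-stage pipeline: threshold the sigmoid
    probability map, find connected components, keep the largest blob,
    and return its centroid as (x, y). Returns None if no pixel passes
    the threshold."""
    h, w = len(prob_map), len(prob_map[0])
    mask = [[p >= thresh for p in row] for row in prob_map]
    seen = [[False] * w for _ in range(h)]
    largest = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Breadth-first flood fill over 4-connected neighbors
                blob, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(blob) > len(largest):
                    largest = blob
    if not largest:
        return None
    cy = sum(p[0] for p in largest) / len(largest)
    cx = sum(p[1] for p in largest) / len(largest)
    return (cx, cy)
```

A regression branch of the kind Gebauer teaches would replace this hand-coded centroid step with coordinates predicted directly by the network.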
Consider Claim 2 (and similarly Claims 9 and 16), the combination of Han and Gebauer discloses “The method of claim 1, wherein the multitask machine learning model comprises a modified U-net” (Han, Abstract and Page 64740, right column, 1st paragraph).
Consider Claim 3 (and similarly Claims 10 and 17), the combination of Han and Gebauer discloses “The method of claim 1, wherein obtaining the labeled pupil images comprises: obtaining, from an external pupil image database, pupil images with pupil-center-position labels; and annotating the pupil images by adding segmentation labels” (Han, Abstract discloses “For the training, we make a new dataset of 111,581 images with hand-labeled pupil regions from 29 IR eye video sequences. We also label commonly used datasets (ExCuSe and ElSe dataset) that are considered real-world noisy ones to validate our method”).
Allowable Subject Matter
Claims 4-7, 11-14, and 18-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: consider Claims 4, 11, and 18; none of the cited prior art references, alone or in combination, teaches or suggests the ordered combination “wherein training the multitask machine learning model comprises computing a unified loss function that includes a segmentation loss function associated with the pupil-region segmentation task and a regression loss function associated with the pupil-center-position regression task.” Claims 5-7, 12-14, and 19-20 depend from Claims 4, 11, and 18, respectively, and therefore include the above-referenced allowable subject matter.
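For illustration only, and not as part of the record: a unified loss of the kind recited in Claims 4, 11, and 18 combines a segmentation term and a center-regression term into a single scalar, so minimizing it trains both branches simultaneously. The specific choices below (per-pixel binary cross-entropy for segmentation, mean-squared error for the center, and the weight `lam`) are assumptions of this sketch and are not taken from the claims or the cited art.

```python
import math

def unified_loss(seg_pred, seg_label, center_pred, center_label, lam=1.0):
    """Sketch of a unified multitask loss: mean per-pixel binary
    cross-entropy (segmentation branch) plus lam-weighted mean-squared
    error on the predicted pupil-center coordinates (regression branch)."""
    eps = 1e-7  # clip probabilities away from 0/1 to keep log() finite
    bce, n = 0.0, 0
    for pred_row, label_row in zip(seg_pred, seg_label):
        for p, l in zip(pred_row, label_row):
            p = min(max(p, eps), 1 - eps)
            bce += -(l * math.log(p) + (1 - l) * math.log(1 - p))
            n += 1
    bce /= n
    mse = sum((a - b) ** 2 for a, b in zip(center_pred, center_label)) / len(center_pred)
    return bce + lam * mse
```

Because the two terms share one backward pass through a common encoder, a gradient step on this sum updates both the segmentation head and the regression head at once.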
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion and Contact Information
The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure: Chen et al. (CN 113688675 A – English Machine Translation is attached hereto).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Siamak HARANDI whose telephone number is (571)270-1832. The examiner can normally be reached Monday - Friday 9:30 - 6:00 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini, can be reached at (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Siamak Harandi/Primary Examiner, Art Unit 2662