Prosecution Insights
Last updated: April 18, 2026
Application No. 18/527,036

MULTI-MODAL FACIAL FEATURE EXTRACTION USING BRANCHED MACHINE LEARNING MODELS

Status: Final Rejection (§103)
Filed: Dec 01, 2023
Examiner: RUSH, ERIC
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)

Grant Probability: 61% (Moderate)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 61% (383 granted / 628 resolved; -1.0% vs TC avg)
Interview Lift: +36.2% (among resolved cases with interview)
Avg Prosecution: 3y 5m (32 currently pending)
Total Applications: 660 (across all art units)

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 27.7% (-12.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 628 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is responsive to the amendments and remarks received 10 February 2026. Claims 1 - 20 are currently pending.

Response to Arguments

Applicant’s arguments with respect to claims 1 - 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1, 4, 5, 7, 8, 11, 12, 14, 15, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Dong (U.S. Publication No. 2022/0207260 A1) in view of Liu et al. (U.S. Publication No. 2021/0142164 A1) in view of Lin et al. (U.S. Publication No. 2020/0349464 A1).

With regards to claim 1, Dong discloses a method for using a machine learning model comprising a base model and multiple branch models, (Dong, Abstract, Figs. 4B - 6, Pg. 1 ¶ 0002 and 0004 - 0005, Pg. 3 ¶ 0037 - Pg. 4 ¶ 0039, Pg. 4 ¶ 0046, Pg. 5 ¶ 0049 - 0053, Pg. 6 ¶ 0056 - 0057) the method comprising: obtaining an image containing a face; (Dong, Figs. 1A - 5 & 7 - 9B, Pg. 1 ¶ 0002 - 0005, Pg. 2 ¶ 0022, Pg. 3 ¶ 0036 - 0038, Pg. 5 ¶ 0049 - 0052, Pg.
7 ¶ 0073) processing the image using the base model to generate intermediate data based on the image; (Dong, Figs. 4B - 6, Pg. 1 ¶ 0004 - 0005, Pg. 2 ¶ 0023, Pg. 4 ¶ 0039 and 0046, Pg. 5 ¶ 0049 - 0050 and 0053, Pg. 6 ¶ 0055 - 0057) processing the intermediate data using a first branch model of the multiple branch models to perform a first image processing task, the first image processing task associated with analyzing the image containing the face; (Dong, Figs. 4B - 6, Pg. 4 ¶ 0039 and 0046, Pg. 5 ¶ 0049 - 0053, Pg. 6 ¶ 0055 - 0057, Pg. 7 ¶ 0070 - 0073 [“the output from the common DNN backbone of the DNN is supplied further to a branch of the DNN trained for facial feature extraction…”]) and processing the intermediate data using a second branch model of the multiple branch models to perform a second image processing task different from the first image processing task, the second image processing task associated with analyzing the image containing the face; (Dong, Figs. 4B - 6, Pg. 4 ¶ 0039 and 0046, Pg. 5 ¶ 0049 - 0053, Pg. 6 ¶ 0055 - 0057, Pg. 7 ¶ 0070 - 0073 [“the output from the common DNN backbone of the DNN is supplied further to… and another branch of the DNN for quality score prediction”]) wherein the base model and the first branch model are trained using a first dataset (Dong, Abstract, Fig. 9A, Pg. 1 ¶ 0004, Pg. 2 ¶ 0022, Pg. 4 ¶ 0039 and 0046, Pg. 6 ¶ 0054 and 0056, Pg. 7 ¶ 0070) and the second branch model is trained using a second dataset different from the first dataset; (Dong, Abstract, Figs. 7 - 9A, Pg. 1 ¶ 0004, Pg. 2 ¶ 0022, Pg. 4 ¶ 0039, Pg. 4 ¶ 0044 - Pg. 5 ¶ 0047, Pg. 5 ¶ 0049, Pg. 6 ¶ 0054, 0056 and 0058 - 0065, Pg. 7 ¶ 0070 - 0071) and wherein the first dataset and the second dataset have different annotation techniques. (Dong, Figs. 7 - 9A, Pg. 1 ¶ 0004, Pg. 4 ¶ 0039 - 0041, Pg. 4 ¶ 0044 - Pg. 5 ¶ 0047, Pg. 5 ¶ 0049 - 0051, Pg. 6 ¶ 0054 - 0058, Pg. 
7 ¶ 0069 - 0073) Dong fails to disclose explicitly wherein the base model is trained using the second dataset different from the first dataset; and wherein the datasets have different pixel-wise annotation techniques. Pertaining to analogous art, Liu et al. disclose a method for using a machine learning model comprising a base model and multiple branch models, (Liu et al., Abstract, Figs. 2 - 4, Pg. 3 ¶ 0029 - 0031 and 0037, Pg. 4 ¶ 0040 - 0043, Pg. 5 ¶ 0056 - 0060) the method comprising: processing input data using the base model to generate intermediate data based on input data; (Liu et al., Abstract, Figs. 2 - 4, Pg. 3 ¶ 0029 - 0030 and 0037, Pg. 4 ¶ 0040 - 0043, Pg. 6 ¶ 0064 - 0067) processing the intermediate data using a first branch model of the multiple branch models to perform a first task, the first task associated with analyzing the input data; (Liu et al., Abstract, Figs. 2 - 4, Pg. 3 ¶ 0029 - 0030 and 0037 - 0038, Pg. 4 ¶ 0040 - 0043, Pg. 5 ¶ 0060, Pg. 6 ¶ 0064 - 0067, Pg. 7 ¶ 0079) and processing the intermediate data using a second branch model of the multiple branch models to perform a second task different from the first task, the second task associated with analyzing the input data; (Liu et al., Abstract, Figs. 2 - 4, Pg. 3 ¶ 0029 - 0030 and 0037 - 0038, Pg. 4 ¶ 0040 - 0043, Pg. 5 ¶ 0060, Pg. 6 ¶ 0064 - 0067, Pg. 7 ¶ 0079) wherein the base model and the first branch model are trained using a first dataset (Liu et al., Figs. 3 & 6, Pg. 4 ¶ 0041 - 0046, Pg. 5 ¶ 0057 - 0060, Pg. 6 ¶ 0062 - 0069, Pg. 7 ¶ 0079) and the base model and the second branch model are trained using a second dataset different from the first dataset. (Liu et al., Figs. 3 & 6, Pg. 4 ¶ 0041 - 0046, Pg. 5 ¶ 0057 - 0060, Pg. 6 ¶ 0062 - 0069, Pg. 7 ¶ 0079) Liu et al. fail to disclose explicitly wherein the datasets have different pixel-wise annotation techniques. Pertaining to analogous art, Lin et al. 
disclose using a first branch model of the multiple branch models to perform a first image processing task; (Lin et al., Abstract, Figs. 3 - 5A, 5C - 5F & 10, Pg. 1 ¶ 0006 - 0009, Pg. 3 ¶ 0029 - 0032, Pg. 7 ¶ 0056 and 0058, Pg. 8 ¶ 0065 - 0066 and 0069 - 0070, Pg. 9 ¶ 0072 - 0076, Pg. 12 ¶ 0092 - 0094, Pg. 13 ¶ 0099 - 0100, Pg. 17 ¶ 0123 - 0127) and using a second branch model of the multiple branch models to perform a second image processing task different from the first image processing task; (Lin et al., Abstract, Figs. 3 - 5A, 5C - 5F & 10, Pg. 1 ¶ 0006 - 0009, Pg. 3 ¶ 0029 - 0032, Pg. 7 ¶ 0056 and 0058, Pg. 8 ¶ 0065 - 0066 and 0069 - 0070, Pg. 9 ¶ 0072 - 0076, Pg. 12 ¶ 0092 - 0094, Pg. 13 ¶ 0099 - 0100, Pg. 17 ¶ 0123 - 0127) wherein first branch model is trained using a first dataset (Lin et al., Abstract, Pg. 1 ¶ 0006 - 0009, Pg. 3 ¶ 0031 - 0032 and 0035, Pg. 4 ¶ 0035 - 0038 and 0041, Pg. 8 ¶ 0069 - 0070, Pg. 11 ¶ 0089, Pg. 12 ¶ 0092 - Pg. 13 ¶ 0095, Pg. 13 ¶ 0099 - 0100, Pg. 14 ¶ 0103 - 0104, Pg. 17 ¶ 0123 - 0127) and the second branch model is trained using a second dataset different from the first dataset; (Lin et al., Abstract, Pg. 1 ¶ 0006 - 0009, Pg. 3 ¶ 0031 - 0032 and 0035, Pg. 4 ¶ 0035 - 0038 and 0041, Pg. 8 ¶ 0069 - 0070, Pg. 11 ¶ 0089, Pg. 12 ¶ 0092 - Pg. 13 ¶ 0095, Pg. 13 ¶ 0099 - 0100, Pg. 14 ¶ 0103 - 0104, Pg. 17 ¶ 0123 - 0127) and wherein the first dataset and the second dataset have different pixel-wise annotation techniques. (Lin et al., Fig. 3, Pg. 3 ¶ 0029 - 0031, Pg. 4 ¶ 0037 - 0038, Pg. 6 ¶ 0049, Pg. 7 ¶ 0056 and 0058, Pg. 9 ¶ 0072 - 0075, Pg. 12 ¶ 0092 - 0094, Pg. 13 ¶ 0099 - 0100, Pg. 16 ¶ 0120) Dong and Liu et al. are combinable because they are both directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks. 
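The arrangement mapped to claim 1 above — a shared base model whose intermediate output feeds multiple task-specific branch models — can be sketched as follows. This is a hypothetical illustration only (all layer sizes, weights, and names are invented); it is not code from Dong, Liu, or Lin:

```python
import numpy as np

rng = np.random.default_rng(0)

class BranchedModel:
    """Toy shared-backbone model: a base model produces intermediate
    data, and each task-specific branch consumes that same output
    (e.g., one branch for feature extraction, one for quality scoring)."""

    def __init__(self, in_dim=64, mid_dim=32, embed_dim=16):
        # All dimensions here are invented for illustration.
        self.w_base = rng.standard_normal((in_dim, mid_dim)) * 0.1
        self.w_feat = rng.standard_normal((mid_dim, embed_dim)) * 0.1  # first branch
        self.w_qual = rng.standard_normal((mid_dim, 1)) * 0.1          # second branch

    def forward(self, x):
        z = np.maximum(x @ self.w_base, 0.0)                 # intermediate data (ReLU)
        embedding = z @ self.w_feat                          # task 1: feature embedding
        quality = 1.0 / (1.0 + np.exp(-(z @ self.w_qual)))   # task 2: score in (0, 1)
        return embedding, quality

model = BranchedModel()
face = rng.standard_normal((1, 64))   # stand-in for a preprocessed face image
emb, q = model.forward(face)
```

The point of the sketch is structural: both branches read the same intermediate tensor `z`, which is what the cited passages describe as the common backbone output supplied to each branch.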
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Dong with the teachings of Liu et al. This modification would have been prompted in order to enhance the base device of Dong with the well-known and applicable technique Liu et al. applied to a similar device. Training the base model using both the first and second datasets, as taught by Liu et al., would enhance the base device of Dong by improving the ability of its machine learning model to learn a more general and universal face image representation that is good for use in performing multiple different functions/tasks as well as by helping prevent the machine learning model of the base device of Dong from overfitting to a specific function/task of the multiple different functions/tasks it performs, as taught and suggested by Liu et al., see at least page 3 paragraphs 0029 - 0031 and 0037 - 0038 of Liu et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that both the first and second datasets would be utilized to train the base model of the machine learning model of the base device of Dong so as to improve the ability of the machine learning model to learn a more general and universal face image representation that is good for performing multiple different functions/tasks and to help prevent the machine learning model from overfitting to a specific function/task of the multiple different functions/tasks it performs. In addition, Dong in view of Liu et al. and Lin et al. are combinable because they are all directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks and, similar to Dong, Lin et al. is also directed towards processing images to perform various different image processing tasks. 
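The Liu-style training rationale above — updating the shared base model with gradients from both datasets while each branch sees only its own — can be sketched as a toy linear model (all dimensions and data are invented for illustration; this is not code from any cited reference):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy datasets with different label types (dims invented for illustration).
a1, a2 = rng.standard_normal((8, 4)), rng.standard_normal((8, 1))
x1 = rng.standard_normal((100, 8)); y1 = x1 @ a1   # "first dataset"
x2 = rng.standard_normal((100, 8)); y2 = x2 @ a2   # "second dataset"

w_base = rng.standard_normal((8, 6)) * 0.1   # shared base model
w_h1 = rng.standard_normal((6, 4)) * 0.1     # branch trained on dataset 1 only
w_h2 = rng.standard_normal((6, 1)) * 0.1     # branch trained on dataset 2 only
lr = 0.01

def loss(x, y, w_head):
    return float(np.mean((x @ w_base @ w_head - y) ** 2))

def step(x, y, w_head):
    """One squared-error gradient step: the head AND the shared base update."""
    global w_base
    z = x @ w_base
    err = z @ w_head - y
    g_head = z.T @ err / len(x)
    g_base = x.T @ (err @ w_head.T) / len(x)  # base receives this task's gradient
    w_base -= lr * g_base
    return w_head - lr * g_head

before = (loss(x1, y1, w_h1), loss(x2, y2, w_h2))
# Alternate per-task batches: the base sees BOTH datasets, each branch only its own.
for _ in range(500):
    w_h1 = step(x1, y1, w_h1)
    w_h2 = step(x2, y2, w_h2)
after = (loss(x1, y1, w_h1), loss(x2, y2, w_h2))
```

Because `w_base` accumulates gradients from both tasks, it is pushed toward a representation useful for both, which is the generalization/anti-overfitting benefit the rationale attributes to Liu.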
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Dong in view of Liu et al. with the teachings of Lin et al. This modification would have been prompted in order to enhance the combined base device of Dong in view of Liu et al. with the well-known and applicable technique Lin et al. applied to a comparable device. Utilizing different datasets having different pixel-wise annotation techniques to train the machine learning model, as taught by Lin et al., would enhance the combined base device by improving its ability to accurately and reliably predict the suitability of images for face/object recognition, extract features from images for face/object recognition and perform face/object recognition since the pixel-wise annotation techniques would allow for the machine learning model to learn from specific sub-regions of interest in the training images instead of or in addition to the training images as a whole. Furthermore, this modification would enhance the combined base device by enabling the machine learning model to perform additional related image processing tasks with respect to input face images thereby allowing for the combined base device to be utilized in various additional image processing applications and increasing its overall appeal and usefulness to potential end-users. Moreover, this modification would have been prompted by the teachings and suggestions of Lin et al. 
that training a machine learning model to perform a target task(s) by utilizing data from multiple different datasets that are related to different tasks improves the performance of the machine learning model particularly in situations where a dataset to train the machine learning model to perform the target task(s) is insufficient, see at least page 1 paragraphs 0006 - 0007, page 2 paragraph 0028 - page 3 paragraph 0032, page 4 paragraphs 0036 - 0038 and page 16 paragraph 0118 of Lin et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that different datasets having different pixel-wise annotation techniques would be utilized to train the machine learning model of the combined base device so as to allow for it to learn based on specific sub-regions of interest in the training images instead of or in addition to the training images as a whole thereby enhancing its ability to accurately and reliably perform its intended functions. Therefore, it would have been obvious to combine Dong with Liu et al. and Lin et al. to obtain the invention as specified in claim 1.

With regards to claim 4, Dong in view of Liu et al. in view of Lin et al. disclose the method of Claim 1, further comprising: processing the intermediate data using a third branch model of the multiple branch models to perform a third image processing task different from the first and second image processing tasks. (Dong, Figs. 1A - 3, Pg. 3 ¶ 0037 - Pg. 4 ¶ 0040, Pg. 4 ¶ 0043 - 0046, Pg. 5 ¶ 0050 - 0052, Pg. 6 ¶ 0055 - 0057, Pg. 7 ¶ 0073 [“face recognition module 210 is operable to make use of the features extracted by the facial feature extraction module 208 to recognize the face present in the image. In an embodiment, the layers of the DNN can be used to match the face present in the image with target faces available in an image database.
In an embodiment, other machine learning models and image matching models may be used for matching the extracted features to recognize the face present in the image” and “face recognition at block 308”]) In addition, analogous art Liu et al. disclose processing the intermediate data using a third branch model of the multiple branch models to perform a third task different from the first and second tasks. (Liu et al., Figs. 2 - 4, Pg. 2 ¶ 0019, Pg. 3 ¶ 0029 - 0030 and 0037, Pg. 4 ¶ 0040 - 0043 and 0045 - 0046, Pg. 5 ¶ 0060, Pg. 7 ¶ 0079)

With regards to claim 5, Dong in view of Liu et al. in view of Lin et al. disclose the method of Claim 1, further comprising: providing outputs of the first branch model to an additional branch model of the machine learning model, the additional branch model configured to perform an additional image processing task. (Dong, Figs. 1A - 3, Pg. 3 ¶ 0037 - Pg. 4 ¶ 0040, Pg. 4 ¶ 0043 - 0046, Pg. 5 ¶ 0050 - 0052, Pg. 6 ¶ 0055 - 0057, Pg. 7 ¶ 0073 [“face recognition module 210 is operable to make use of the features extracted by the facial feature extraction module 208 to recognize the face present in the image. In an embodiment, the layers of the DNN can be used to match the face present in the image with target faces available in an image database. In an embodiment, other machine learning models and image matching models may be used for matching the extracted features to recognize the face present in the image” and “face recognition at block 308”])

With regards to claim 7, Dong in view of Liu et al. in view of Lin et al. disclose the method of Claim 1, wherein the intermediate data generated by the base model comprises a high-dimensional latent representation of the face in the image, (Dong, Fig. 6, Pg. 1 ¶ 0003, Pg. 4 ¶ 0039 and 0046, Pg. 5 ¶ 0049 - 0053, Pg. 6 ¶ 0055 - 0057 [“FIG.
6 illustrates an example of prelogits 604 used for facial feature extraction and quality prediction in accordance with an embodiment of the present disclosure. In an example implementation, an Inception ResNet v1 backbone 602 can be used to implement the common DNN backbone (e.g., DNN backbone 452 or DNN 502). As shown in FIG. 6, the backbone 602 may generate prelogits, which can be further used by FC1 606 designed to extract a facial feature embedding and FC2 608 designed to predict a quality score. According to one embodiment, prelogits 604 output represents a fully connected layer”]) the high-dimensional latent representation containing different discriminative information used by different ones of the multiple branch models. (Dong, Fig. 6, Pg. 1 ¶ 0003, Pg. 4 ¶ 0039, 0041 and 0046, Pg. 5 ¶ 0049 - 0053, Pg. 6 ¶ 0055 - 0057, Pg. 7 ¶ 0070 - 0073)

With regards to claim 8, Dong discloses an electronic device (Dong, Abstract, Figs. 1A - 2 & 10, Pg. 2 ¶ 0025 - 0027, Pg. 7 ¶ 0068 and 0074 - 0075, Pg. 8 ¶ 0077 - 0081) comprising: an imaging sensor (Dong, Figs. 1A - 1B, Pg. 3 ¶ 0034 and 0036 - 0037, Pg. 4 ¶ 0040 and 0042 - 0043, Pg. 5 ¶ 0052, Pg. 10 ¶ 0074) configured to capture an image containing a face; (Dong, Figs. 1A - 5 & 7 - 9B, Pg. 1 ¶ 0002 - 0005, Pg. 2 ¶ 0022, Pg. 3 ¶ 0036 - 0038, Pg. 5 ¶ 0049 - 0052, Pg. 7 ¶ 0073) and at least one processor (Dong, Figs. 1A - 1B & 10, Pg. 2 ¶ 0025 - 0027, Pg. 4 ¶ 0041 - 0042, Pg. 7 ¶ 0068 and 0074 - 0075, Pg. 8 ¶ 0077 - 0081) configured to: process the image using a base model of a machine learning model to generate intermediate data based on the image; (Dong, Figs. 4B - 6, Pg. 1 ¶ 0004 - 0005, Pg. 2 ¶ 0023, Pg. 4 ¶ 0039 and 0046, Pg. 5 ¶ 0049 - 0050 and 0053, Pg.
6 ¶ 0055 - 0057) process the intermediate data using a first branch model of multiple branch models of the machine learning model to perform a first image processing task, the first image processing task associated with analyzing the image containing the face; (Dong, Figs. 4B - 6, Pg. 4 ¶ 0039 and 0046, Pg. 5 ¶ 0049 - 0053, Pg. 6 ¶ 0055 - 0057, Pg. 7 ¶ 0070 - 0073 [“the output from the common DNN backbone of the DNN is supplied further to a branch of the DNN trained for facial feature extraction…”]) and process the intermediate data using a second branch model of the multiple branch models to perform a second image processing task different from the first image processing task, the second image processing task associated with analyzing the image containing the face; (Dong, Figs. 4B - 6, Pg. 4 ¶ 0039 and 0046, Pg. 5 ¶ 0049 - 0053, Pg. 6 ¶ 0055 - 0057, Pg. 7 ¶ 0070 - 0073 [“the output from the common DNN backbone of the DNN is supplied further to… and another branch of the DNN for quality score prediction”]) wherein the base model and the first branch model are trained using a first dataset (Dong, Abstract, Fig. 9A, Pg. 1 ¶ 0004, Pg. 2 ¶ 0022, Pg. 4 ¶ 0039 and 0046, Pg. 6 ¶ 0054 and 0056, Pg. 7 ¶ 0070) and the second branch model is trained using a second dataset different from the first dataset; (Dong, Abstract, Figs. 7 - 9A, Pg. 1 ¶ 0004, Pg. 2 ¶ 0022, Pg. 4 ¶ 0039, Pg. 4 ¶ 0044 - Pg. 5 ¶ 0047, Pg. 5 ¶ 0049, Pg. 6 ¶ 0054, 0056 and 0058 - 0065, Pg. 7 ¶ 0070 - 0071) and wherein the first dataset and the second dataset have different annotation techniques. (Dong, Figs. 7 - 9A, Pg. 1 ¶ 0004, Pg. 4 ¶ 0039 - 0041, Pg. 4 ¶ 0044 - Pg. 5 ¶ 0047, Pg. 5 ¶ 0049 - 0051, Pg. 6 ¶ 0054 - 0058, Pg. 7 ¶ 0069 - 0073) Dong fails to disclose explicitly wherein the base model is trained using a second dataset different from the first dataset; and wherein the datasets have different pixel-wise annotation techniques. Pertaining to analogous art, Liu et al. 
disclose processing input data using a base model of a machine learning model to generate intermediate data based on the input data; (Liu et al., Abstract, Figs. 2 - 4, Pg. 3 ¶ 0029 - 0030 and 0037, Pg. 4 ¶ 0040 - 0043, Pg. 6 ¶ 0064 - 0067) processing the intermediate data using a first branch model of multiple branch models of the machine learning model to perform a first task, the first task associated with analyzing the input data; (Liu et al., Abstract, Figs. 2 - 4, Pg. 3 ¶ 0029 - 0030 and 0037 - 0038, Pg. 4 ¶ 0040 - 0043, Pg. 5 ¶ 0060, Pg. 6 ¶ 0064 - 0067, Pg. 7 ¶ 0079) and processing the intermediate data using a second branch model of the multiple branch models to perform a second task different from the first task, the second task associated with analyzing the input data; (Liu et al., Abstract, Figs. 2 - 4, Pg. 3 ¶ 0029 - 0030 and 0037 - 0038, Pg. 4 ¶ 0040 - 0043, Pg. 5 ¶ 0060, Pg. 6 ¶ 0064 - 0067, Pg. 7 ¶ 0079) wherein the base model and the first branch model are trained using a first dataset (Liu et al., Figs. 3 & 6, Pg. 4 ¶ 0041 - 0046, Pg. 5 ¶ 0057 - 0060, Pg. 6 ¶ 0062 - 0069, Pg. 7 ¶ 0079) and the base model and the second branch model are trained using a second dataset different from the first dataset. (Liu et al., Figs. 3 & 6, Pg. 4 ¶ 0041 - 0046, Pg. 5 ¶ 0057 - 0060, Pg. 6 ¶ 0062 - 0069, Pg. 7 ¶ 0079) Liu et al. fail to disclose explicitly wherein the datasets have different pixel-wise annotation techniques. Pertaining to analogous art, Lin et al. disclose using a first branch model of the multiple branch models to perform a first image processing task; (Lin et al., Abstract, Figs. 3 - 5A, 5C - 5F & 10, Pg. 1 ¶ 0006 - 0009, Pg. 3 ¶ 0029 - 0032, Pg. 7 ¶ 0056 and 0058, Pg. 8 ¶ 0065 - 0066 and 0069 - 0070, Pg. 9 ¶ 0072 - 0076, Pg. 12 ¶ 0092 - 0094, Pg. 13 ¶ 0099 - 0100, Pg. 
17 ¶ 0123 - 0127) and using a second branch model of the multiple branch models to perform a second image processing task different from the first image processing task; (Lin et al., Abstract, Figs. 3 - 5A, 5C - 5F & 10, Pg. 1 ¶ 0006 - 0009, Pg. 3 ¶ 0029 - 0032, Pg. 7 ¶ 0056 and 0058, Pg. 8 ¶ 0065 - 0066 and 0069 - 0070, Pg. 9 ¶ 0072 - 0076, Pg. 12 ¶ 0092 - 0094, Pg. 13 ¶ 0099 - 0100, Pg. 17 ¶ 0123 - 0127) wherein first branch model is trained using a first dataset (Lin et al., Abstract, Pg. 1 ¶ 0006 - 0009, Pg. 3 ¶ 0031 - 0032 and 0035, Pg. 4 ¶ 0035 - 0038 and 0041, Pg. 8 ¶ 0069 - 0070, Pg. 11 ¶ 0089, Pg. 12 ¶ 0092 - Pg. 13 ¶ 0095, Pg. 13 ¶ 0099 - 0100, Pg. 14 ¶ 0103 - 0104, Pg. 17 ¶ 0123 - 0127) and the second branch model is trained using a second dataset different from the first dataset; (Lin et al., Abstract, Pg. 1 ¶ 0006 - 0009, Pg. 3 ¶ 0031 - 0032 and 0035, Pg. 4 ¶ 0035 - 0038 and 0041, Pg. 8 ¶ 0069 - 0070, Pg. 11 ¶ 0089, Pg. 12 ¶ 0092 - Pg. 13 ¶ 0095, Pg. 13 ¶ 0099 - 0100, Pg. 14 ¶ 0103 - 0104, Pg. 17 ¶ 0123 - 0127) and wherein the first dataset and the second dataset have different pixel-wise annotation techniques. (Lin et al., Fig. 3, Pg. 3 ¶ 0029 - 0031, Pg. 4 ¶ 0037 - 0038, Pg. 6 ¶ 0049, Pg. 7 ¶ 0056 and 0058, Pg. 9 ¶ 0072 - 0075, Pg. 12 ¶ 0092 - 0094, Pg. 13 ¶ 0099 - 0100, Pg. 16 ¶ 0120) Dong and Liu et al. are combinable because they are both directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Dong with the teachings of Liu et al. This modification would have been prompted in order to enhance the base device of Dong with the well-known and applicable technique Liu et al. applied to a similar device. 
Training the base model using both the first and second datasets, as taught by Liu et al., would enhance the base device of Dong by improving the ability of its machine learning model to learn a more general and universal face image representation that is good for use in performing multiple different functions/tasks as well as by helping prevent the machine learning model of the base device of Dong from overfitting to a specific function/task of the multiple different functions/tasks it performs, as taught and suggested by Liu et al., see at least page 3 paragraphs 0029 - 0031 and 0037 - 0038 of Liu et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that both the first and second datasets would be utilized to train the base model of the machine learning model of the base device of Dong so as to improve the ability of the machine learning model to learn a more general and universal face image representation that is good for performing multiple different functions/tasks and to help prevent the machine learning model from overfitting to a specific function/task of the multiple different functions/tasks it performs. In addition, Dong in view of Liu et al. and Lin et al. are combinable because they are all directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks and, similar to Dong, Lin et al. is also directed towards processing images to perform various different image processing tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Dong in view of Liu et al. with the teachings of Lin et al. This modification would have been prompted in order to enhance the combined base device of Dong in view of Liu et al. with the well-known and applicable technique Lin et al. applied to a comparable device. 
Utilizing different datasets having different pixel-wise annotation techniques to train the machine learning model, as taught by Lin et al., would enhance the combined base device by improving its ability to accurately and reliably predict the suitability of images for face/object recognition, extract features from images for face/object recognition and perform face/object recognition since the pixel-wise annotation techniques would allow for the machine learning model to learn from specific sub-regions of interest in the training images instead of or in addition to the training images as a whole. Furthermore, this modification would enhance the combined base device by enabling the machine learning model to perform additional related image processing tasks with respect to input face images thereby allowing for the combined base device to be utilized in various additional image processing applications and increasing its overall appeal and usefulness to potential end-users. Moreover, this modification would have been prompted by the teachings and suggestions of Lin et al. that training a machine learning model to perform a target task(s) by utilizing data from multiple different datasets that are related to different tasks improves the performance of the machine learning model particularly in situations where a dataset to train the machine learning model to perform the target task(s) is insufficient, see at least page 1 paragraphs 0006 - 0007, page 2 paragraph 0028 - page 3 paragraph 0032, page 4 paragraphs 0036 - 0038 and page 16 paragraph 0118 of Lin et al.
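Two datasets with "different pixel-wise annotation techniques," in the sense relied on above, might look like the following toy data — one labeled with binary segmentation masks, the other with landmark heatmaps. The shapes, sizes, and annotation choices are invented for illustration and are not taken from any cited reference:

```python
import numpy as np

rng = np.random.default_rng(2)
H = W = 8               # tiny "images"; sizes invented for illustration
N = 10                  # images per dataset

# Dataset A — pixel-wise annotation technique #1: binary face/background
# masks (one {0, 1} label per pixel).
imgs_a = rng.random((N, H, W))
masks_a = (imgs_a > 0.5).astype(np.uint8)

# Dataset B — pixel-wise annotation technique #2: landmark heatmaps
# (a real-valued Gaussian bump per pixel, centered on an annotated landmark).
imgs_b = rng.random((N, H, W))
ys, xs = np.mgrid[0:H, 0:W]

def heatmap(cy, cx, sigma=1.5):
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

landmarks = rng.integers(0, H, size=(N, 2))
heat_b = np.stack([heatmap(cy, cx) for cy, cx in landmarks])

# Same image shape, but the per-pixel label semantics differ: one branch
# would be supervised by masks_a, the other by heat_b.
```

Each annotation type supervises a different sub-region-level objective, which is the mechanism the rationale above attributes to Lin: learning from specific sub-regions of interest rather than only whole-image labels.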
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that different datasets having different pixel-wise annotation techniques would be utilized to train the machine learning model of the combined base device so as to allow for it to learn based on specific sub-regions of interest in the training images instead of or in addition to the training images as a whole thereby enhancing its ability to accurately and reliably perform its intended functions. Therefore, it would have been obvious to combine Dong with Liu et al. and Lin et al. to obtain the invention as specified in claim 8.

With regards to claim 11, Dong in view of Liu et al. in view of Lin et al. disclose the electronic device of Claim 8, wherein the at least one processor is further configured to process the intermediate data using a third branch model of the multiple branch models to perform a third image processing task different from the first and second image processing tasks. (Dong, Figs. 1A - 3, Pg. 3 ¶ 0037 - Pg. 4 ¶ 0040, Pg. 4 ¶ 0043 - 0046, Pg. 5 ¶ 0050 - 0052, Pg. 6 ¶ 0055 - 0057, Pg. 7 ¶ 0073 [“face recognition module 210 is operable to make use of the features extracted by the facial feature extraction module 208 to recognize the face present in the image. In an embodiment, the layers of the DNN can be used to match the face present in the image with target faces available in an image database. In an embodiment, other machine learning models and image matching models may be used for matching the extracted features to recognize the face present in the image” and “face recognition at block 308”]) In addition, analogous art Liu et al. disclose wherein the at least one processor is further configured to process the intermediate data using a third branch model of the multiple branch models to perform a third task different from the first and second tasks. (Liu et al., Figs. 2 - 4, Pg. 2 ¶ 0019, Pg.
3 ¶ 0029 - 0030 and 0037, Pg. 4 ¶ 0040 - 0043 and 0045 - 0046, Pg. 5 ¶ 0060, Pg. 7 ¶ 0079)

With regards to claim 12, Dong in view of Liu et al. in view of Lin et al. disclose the electronic device of Claim 8, wherein the at least one processor is further configured to provide outputs of the first branch model to an additional branch model of the machine learning model, the additional branch model configured to perform an additional image processing task. (Dong, Figs. 1A - 3, Pg. 3 ¶ 0037 - Pg. 4 ¶ 0040, Pg. 4 ¶ 0043 - 0046, Pg. 5 ¶ 0050 - 0052, Pg. 6 ¶ 0055 - 0057, Pg. 7 ¶ 0073 [“face recognition module 210 is operable to make use of the features extracted by the facial feature extraction module 208 to recognize the face present in the image. In an embodiment, the layers of the DNN can be used to match the face present in the image with target faces available in an image database. In an embodiment, other machine learning models and image matching models may be used for matching the extracted features to recognize the face present in the image” and “face recognition at block 308”])

With regards to claim 14, Dong in view of Liu et al. in view of Lin et al. disclose the electronic device of Claim 8, wherein the intermediate data generated by the base model comprises a high-dimensional latent representation of the face in the image, (Dong, Fig. 6, Pg. 1 ¶ 0003, Pg. 4 ¶ 0039 and 0046, Pg. 5 ¶ 0049 - 0053, Pg. 6 ¶ 0055 - 0057 [“FIG. 6 illustrates an example of prelogits 604 used for facial feature extraction and quality prediction in accordance with an embodiment of the present disclosure. In an example implementation, an Inception ResNet v1 backbone 602 can be used to implement the common DNN backbone (e.g., DNN backbone 452 or DNN 502). As shown in FIG. 6, the backbone 602 may generate prelogits, which can be further used by FC1 606 designed to extract a facial feature embedding and FC2 608 designed to predict a quality score.
According to one embodiment, prelogits 604 output represents a fully connected layer”]) the high-dimensional latent representation containing different discriminative information used by different ones of the multiple branch models. (Dong, Fig. 6, Pg. 1 ¶ 0003, Pg. 4 ¶ 0039, 0041 and 0046, Pg. 5 ¶ 0049 - 0053, Pg. 6 ¶ 0055 - 0057, Pg. 7 ¶ 0070 - 0073)

With regards to claim 15, Dong discloses a method for training a machine learning model comprising a base model and multiple branch models, (Dong, Abstract, Figs. 4B - 6 & 9A, Pg. 1 ¶ 0002 and 0004 - 0005, Pg. 3 ¶ 0037 - Pg. 4 ¶ 0039, Pg. 4 ¶ 0041 and 0045 - 0046, Pg. 5 ¶ 0049 - Pg. 6 ¶ 0058, Pg. 7 ¶ 0069 - 0073) the method comprising: obtaining a first dataset for training a first branch model of the multiple branch models to perform a first image processing task, (Dong, Abstract, Fig. 9A, Pg. 1 ¶ 0004, Pg. 2 ¶ 0022, Pg. 4 ¶ 0039 and 0046, Pg. 6 ¶ 0054 and 0056, Pg. 7 ¶ 0070) the first image processing task associated with analyzing images containing faces; (Dong, Figs. 4B - 6, Pg. 4 ¶ 0039 and 0046, Pg. 5 ¶ 0049 - 0053, Pg. 6 ¶ 0055 - 0057, Pg. 7 ¶ 0070 - 0073 [“the output from the common DNN backbone of the DNN is supplied further to a branch of the DNN trained for facial feature extraction…”]) obtaining a second dataset for training a second branch model of the multiple branch models to perform a second image processing task different from the first image processing task, (Dong, Abstract, Figs. 7 - 9A, Pg. 1 ¶ 0004, Pg. 2 ¶ 0022, Pg. 4 ¶ 0039, Pg. 4 ¶ 0044 - Pg. 5 ¶ 0047, Pg. 5 ¶ 0049, Pg. 6 ¶ 0054, 0056 and 0058 - 0065, Pg. 7 ¶ 0070 - 0071) the second image processing task associated with analyzing the images containing the faces, (Dong, Figs. 4B - 6, Pg. 4 ¶ 0039 and 0046, Pg. 5 ¶ 0049 - 0053, Pg. 6 ¶ 0055 - 0057, Pg.
7 ¶ 0070 - 0073 [“the output from the common DNN backbone of the DNN is supplied further to… and another branch of the DNN for quality score prediction”]) the second dataset different from the first dataset; (Dong, Abstract, Figs. 7 - 9A, Pg. 1 ¶ 0004, Pg. 2 ¶ 0022, Pg. 4 ¶ 0039, Pg. 4 ¶ 0044 - Pg. 5 ¶ 0047, Pg. 5 ¶ 0049, Pg. 6 ¶ 0054, 0056 and 0058 - 0065, Pg. 7 ¶ 0070 - 0071) and training the base model and the first branch model using the first dataset (Dong, Abstract, Fig. 9A, Pg. 1 ¶ 0004, Pg. 2 ¶ 0022, Pg. 4 ¶ 0039 and 0046, Pg. 6 ¶ 0054 and 0056, Pg. 7 ¶ 0070) and the second branch model using the second dataset; (Dong, Abstract, Figs. 7 - 9A, Pg. 1 ¶ 0004, Pg. 2 ¶ 0022, Pg. 4 ¶ 0039, Pg. 4 ¶ 0044 - Pg. 5 ¶ 0047, Pg. 5 ¶ 0049, Pg. 6 ¶ 0054, 0056 and 0058 - 0065, Pg. 7 ¶ 0070 - 0071) and wherein the first dataset and the second dataset have different annotation techniques. (Dong, Figs. 7 - 9A, Pg. 1 ¶ 0004, Pg. 4 ¶ 0039 - 0041, Pg. 4 ¶ 0044 - Pg. 5 ¶ 0047, Pg. 5 ¶ 0049 - 0051, Pg. 6 ¶ 0054 - 0058, Pg. 7 ¶ 0069 - 0073) Dong fails to disclose explicitly training the base model using the second dataset; and wherein the datasets have different pixel-wise annotation techniques. Pertaining to analogous art, Liu et al. disclose a method for training a machine learning model comprising a base model and multiple branch models, (Liu et al., Abstract, Figs. 2 - 4 & 6, Pg. 2 ¶ 0019, Pg. 3 ¶ 0029 - 0031 and 0037 - 0038, Pg. 4 ¶ 0040 - 0045, Pg. 5 ¶ 0057 - 0060, Pg. 6 ¶ 0064 - 0069, Pg. 7 ¶ 0079) the method comprising: obtaining a first dataset for training a first branch model of the multiple branch models (Liu et al., Figs. 3 & 6, Pg. 4 ¶ 0041 - 0045, Pg. 5 ¶ 0057 - 0060, Pg. 6 ¶ 0062 - 0069, Pg. 
7 ¶ 0079 [“In the multi-task refining stage, all the parameters of the teacher model 200, including bottom shared layers 210a-c and top task-specific layers 220a-c, are updated through mini-batch based stochastic gradient descent… the training data are separated or packed into mini-batches, where each mini-batch only contains samples from one NLU task” and “Each top task-specific layer of the model (teacher model 200 or student model 400) performs or implements a different, respective natural language understanding (NLU) task. Referring to FIG. 6, T number of tasks t may be performed by the top task-specific layers. The data for the T tasks are packed into batches. In some embodiments, the training samples are selected from each dataset and packed into task-specific batches”]) to perform a first task, the first task associated with analyzing input data; (Liu et al., Abstract, Figs. 2 - 4, Pg. 3 ¶ 0029 - 0030 and 0037 - 0038, Pg. 4 ¶ 0040 - 0043, Pg. 5 ¶ 0060, Pg. 6 ¶ 0064 - 0067, Pg. 7 ¶ 0079) obtaining a second dataset for training a second branch model of the multiple branch models (Liu et al., Figs. 3 & 6, Pg. 4 ¶ 0041 - 0045, Pg. 5 ¶ 0057 - 0060, Pg. 6 ¶ 0062 - 0069, Pg. 7 ¶ 0079 [“In the multi-task refining stage, all the parameters of the teacher model 200, including bottom shared layers 210a-c and top task-specific layers 220a-c, are updated through mini-batch based stochastic gradient descent… the training data are separated or packed into mini-batches, where each mini-batch only contains samples from one NLU task” and “Each top task-specific layer of the model (teacher model 200 or student model 400) performs or implements a different, respective natural language understanding (NLU) task. Referring to FIG. 6, T number of tasks t may be performed by the top task-specific layers. The data for the T tasks are packed into batches. 
In some embodiments, the training samples are selected from each dataset and packed into task-specific batches”]) to perform a second task different from the first task, the second task associated with analyzing the input data, (Liu et al., Abstract, Figs. 2 - 4, Pg. 3 ¶ 0029 - 0030 and 0037 - 0038, Pg. 4 ¶ 0040 - 0043, Pg. 5 ¶ 0060, Pg. 6 ¶ 0064 - 0067, Pg. 7 ¶ 0079) the second dataset different from the first dataset; (Liu et al., Figs. 3 & 6, Pg. 4 ¶ 0041 - 0046, Pg. 5 ¶ 0057 - 0060, Pg. 6 ¶ 0062 - 0069, Pg. 7 ¶ 0079) and training the base model and the first branch model using the first dataset (Liu et al., Figs. 3 & 6, Pg. 4 ¶ 0041 - 0046, Pg. 5 ¶ 0057 - 0060, Pg. 6 ¶ 0062 - 0069, Pg. 7 ¶ 0079) and the base model and the second branch model using the second dataset. (Liu et al., Figs. 3 & 6, Pg. 4 ¶ 0041 - 0046, Pg. 5 ¶ 0057 - 0060, Pg. 6 ¶ 0062 - 0069, Pg. 7 ¶ 0079) Liu et al. fail to disclose explicitly wherein the datasets have different pixel-wise annotation techniques. Pertaining to analogous art, Lin et al. disclose obtaining a first dataset for training a first branch model of the multiple branch models to perform a first image processing task; (Lin et al., Abstract, Pg. 1 ¶ 0006 - 0009, Pg. 3 ¶ 0031 - 0032 and 0035, Pg. 4 ¶ 0035 - 0038 and 0041, Pg. 8 ¶ 0069 - 0070, Pg. 11 ¶ 0089, Pg. 12 ¶ 0092 - Pg. 13 ¶ 0095, Pg. 13 ¶ 0099 - 0100, Pg. 14 ¶ 0103 - 0104, Pg. 17 ¶ 0123 - 0127) obtaining a second dataset for training a second branch model of the multiple branch models to perform a second image processing task different from the first image processing task; (Lin et al., Abstract, Pg. 1 ¶ 0006 - 0009, Pg. 3 ¶ 0031 - 0032 and 0035, Pg. 4 ¶ 0035 - 0038 and 0041, Pg. 8 ¶ 0069 - 0070, Pg. 11 ¶ 0089, Pg. 12 ¶ 0092 - Pg. 13 ¶ 0095, Pg. 13 ¶ 0099 - 0100, Pg. 14 ¶ 0103 - 0104, Pg. 17 ¶ 0123 - 0127) and training the first branch model using the first dataset (Lin et al., Abstract, Pg. 1 ¶ 0006 - 0009, Pg. 3 ¶ 0031 - 0032 and 0035, Pg. 4 ¶ 0035 - 0038 and 0041, Pg. 
8 ¶ 0069 - 0070, Pg. 11 ¶ 0089, Pg. 12 ¶ 0092 - Pg. 13 ¶ 0095, Pg. 13 ¶ 0099 - 0100, Pg. 14 ¶ 0103 - 0104, Pg. 17 ¶ 0123 - 0127) and the second branch model using the second dataset; (Lin et al., Abstract, Pg. 1 ¶ 0006 - 0009, Pg. 3 ¶ 0031 - 0032 and 0035, Pg. 4 ¶ 0035 - 0038 and 0041, Pg. 8 ¶ 0069 - 0070, Pg. 11 ¶ 0089, Pg. 12 ¶ 0092 - Pg. 13 ¶ 0095, Pg. 13 ¶ 0099 - 0100, Pg. 14 ¶ 0103 - 0104, Pg. 17 ¶ 0123 - 0127) wherein the first dataset and the second dataset have different pixel-wise annotation techniques. (Lin et al., Fig. 3, Pg. 3 ¶ 0029 - 0031, Pg. 4 ¶ 0037 - 0038, Pg. 6 ¶ 0049, Pg. 7 ¶ 0056 and 0058, Pg. 9 ¶ 0072 - 0075, Pg. 12 ¶ 0092 - 0094, Pg. 13 ¶ 0099 - 0100, Pg. 16 ¶ 0120) Dong and Liu et al. are combinable because they are both directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Dong with the teachings of Liu et al. This modification would have been prompted in order to enhance the base device of Dong with the well-known and applicable technique Liu et al. applied to a similar device. Training the base model using both the first and second datasets, as taught by Liu et al., would enhance the base device of Dong by improving the ability of its machine learning model to learn a more general and universal face image representation that is good for use in performing multiple different functions/tasks as well as by helping prevent the machine learning model of the base device of Dong from overfitting to a specific function/task of the multiple different functions/tasks it performs, as taught and suggested by Liu et al., see at least page 3 paragraphs 0029 - 0031 and 0037 - 0038 of Liu et al. 
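For illustration only (not part of the record, and all layer sizes, task names, and data in this sketch are invented): the shared-backbone training scheme the rejection attributes to Liu et al. — task-specific mini-batches that each update only their own branch head while every batch updates the shared base — can be sketched roughly as follows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared "base model": a single linear layer standing in for a DNN backbone.
W_base = rng.normal(size=(16, 8)) * 0.1
# Task-specific "branch models": one linear head per task, with different output sizes.
W_head = {"embedding": rng.normal(size=(8, 4)) * 0.1,   # e.g., feature extraction
          "quality": rng.normal(size=(8, 1)) * 0.1}     # e.g., score prediction

def forward(x, task):
    h = np.tanh(x @ W_base)        # shared intermediate representation
    return h, h @ W_head[task]     # task-specific branch output

# Two different (invented) datasets, one per task, with differently shaped labels.
data = {"embedding": (rng.normal(size=(32, 16)), rng.normal(size=(32, 4))),
        "quality": (rng.normal(size=(32, 16)), rng.normal(size=(32, 1)))}

lr = 0.01
for step in range(200):
    # Alternate task-specific mini-batches, as in the cited multi-task refining stage.
    task = ("embedding", "quality")[step % 2]
    x, y = data[task]
    h, pred = forward(x, task)
    err = (pred - y) / len(x)              # mean-squared-error gradient
    dh = (err @ W_head[task].T) * (1 - h**2)
    # Only the active task's head is updated by its own dataset...
    W_head[task] -= lr * h.T @ err
    # ...but the shared base model receives updates from batches of BOTH datasets.
    W_base -= lr * x.T @ dh
```

The point of the sketch is the gradient flow: each head sees gradients only from its own dataset, while the base model is trained on the first and second datasets jointly, which is the feature the rejection reads onto the claimed "training the base model" limitation.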
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that both the first and second datasets would be utilized to train the base model of the machine learning model of the base device of Dong so as to improve the ability of the machine learning model to learn a more general and universal face image representation that is good for performing multiple different functions/tasks and to help prevent the machine learning model from overfitting to a specific function/task of the multiple different functions/tasks it performs. In addition, Dong in view of Liu et al. and Lin et al. are combinable because they are all directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks and, similar to Dong, Lin et al. is also directed towards processing images to perform various different image processing tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Dong in view of Liu et al. with the teachings of Lin et al. This modification would have been prompted in order to enhance the combined base device of Dong in view of Liu et al. with the well-known and applicable technique Lin et al. applied to a comparable device. Utilizing different datasets having different pixel-wise annotation techniques to train the machine learning model, as taught by Lin et al., would enhance the combined base device by improving its ability to accurately and reliably predict the suitability of images for face/object recognition, extract features from images for face/object recognition and perform face/object recognition since the pixel-wise annotation techniques would allow for the machine learning model to learn from specific sub-regions of interest in the training images instead of or in addition to the training images as a whole. 
Furthermore, this modification would enhance the combined base device by enabling the machine learning model to perform additional related image processing tasks with respect to input face images, thereby allowing the combined base device to be utilized in various additional image processing applications and increasing its overall appeal and usefulness to potential end-users. Moreover, this modification would have been prompted by the teachings and suggestions of Lin et al. that training a machine learning model to perform a target task(s) by utilizing data from multiple different datasets that are related to different tasks improves the performance of the machine learning model, particularly in situations where a dataset to train the machine learning model to perform the target task(s) is insufficient, see at least page 1 paragraphs 0006 - 0007, page 2 paragraph 0028 - page 3 paragraph 0032, page 4 paragraphs 0036 - 0038 and page 16 paragraph 0118 of Lin et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that different datasets having different pixel-wise annotation techniques would be utilized to train the machine learning model of the combined base device so as to allow it to learn based on specific sub-regions of interest in the training images instead of or in addition to the training images as a whole, thereby enhancing its ability to accurately and reliably perform its intended functions. Therefore, it would have been obvious to combine Dong with Liu et al. and Lin et al. to obtain the invention as specified in claim 15. - With regards to claim 18, Dong in view of Liu et al. in view of Lin et al. disclose the method of Claim 15, further comprising: obtaining a third dataset for training a third branch model of the multiple branch models (Dong, Pg. 1 ¶ 0002, Pg. 4 ¶ 0039 - 0040 and 0044, Pg. 5 ¶ 0051 - 0052, Pg. 6 ¶ 0054 - 0058, Pg. 
7 ¶ 0073) to perform a third image processing task different from the first and second image processing tasks; (Dong, Figs. 1A - 3, Pg. 3 ¶ 0037 - Pg. 4 ¶ 0040, Pg. 4 ¶ 0043 - 0046, Pg. 5 ¶ 0050 - 0052, Pg. 6 ¶ 0055 - 0057, Pg. 7 ¶ 0073 [“face recognition module 210 is operable to make use of the features extracted by the facial feature extraction module 208 to recognize the face present in the image. In an embodiment, the layers of the DNN can be used to match the face present in the image with target faces available in an image database. In an embodiment, other machine learning models and image matching models may be used for matching the extracted features to recognize the face present in the image” and “face recognition at block 308”]) and training the third branch model using the third dataset. (Dong, Figs. 1A - 3, Pg. 1 ¶ 0002, Pg. 3 ¶ 0037 - Pg. 4 ¶ 0040, Pg. 4 ¶ 0043 - 0046, Pg. 5 ¶ 0050 - 0052, Pg. 6 ¶ 0054 - 0058, Pg. 7 ¶ 0073) Dong fails to disclose explicitly the third dataset different from the first and second datasets; and training the base model using the third dataset. Pertaining to analogous art, Liu et al. disclose obtaining a third dataset for training a third branch model of the multiple branch models (Liu et al., Figs. 3 & 6, Pg. 4 ¶ 0041 - 0046, Pg. 5 ¶ 0057 - 0060, Pg. 6 ¶ 0062 - 0069, Pg. 7 ¶ 0079 [“In the multi-task refining stage, all the parameters of the teacher model 200, including bottom shared layers 210a-c and top task-specific layers 220a-c, are updated through mini-batch based stochastic gradient descent… the training data are separated or packed into mini-batches, where each mini-batch only contains samples from one NLU task” and “Each top task-specific layer of the model (teacher model 200 or student model 400) performs or implements a different, respective natural language understanding (NLU) task. Referring to FIG. 6, T number of tasks t may be performed by the top task-specific layers. 
The data for the T tasks are packed into batches. In some embodiments, the training samples are selected from each dataset and packed into task-specific batches”]) to perform a third task different from the first and second tasks, (Liu et al., Figs. 2 - 4, Pg. 2 ¶ 0019, Pg. 3 ¶ 0029 - 0030 and 0037, Pg. 4 ¶ 0040 - 0043 and 0045 - 0046, Pg. 5 ¶ 0060, Pg. 7 ¶ 0079) the third dataset different from the first and second datasets; (Liu et al., Figs. 3 & 6, Pg. 4 ¶ 0041 - 0046, Pg. 5 ¶ 0057 - 0060, Pg. 6 ¶ 0062 - 0069, Pg. 7 ¶ 0079) and training the base model and the third branch model using the third dataset. (Liu et al., Figs. 3 & 6, Pg. 4 ¶ 0041 - 0046, Pg. 5 ¶ 0057 - 0060, Pg. 6 ¶ 0062 - 0069, Pg. 7 ¶ 0079) - With regards to claim 19, Dong in view of Liu et al. in view of Lin et al. disclose the method of Claim 15, further comprising: training an additional branch model of the machine learning model to perform an additional image processing task, the additional branch model dependent on the first branch model. (Dong, Figs. 1A - 3, Pg. 1 ¶ 0002, Pg. 3 ¶ 0037 - Pg. 4 ¶ 0040, Pg. 4 ¶ 0043 - 0046, Pg. 5 ¶ 0050 - 0052, Pg. 6 ¶ 0054 - 0058, Pg. 7 ¶ 0073 [“face recognition module 210 is operable to make use of the features extracted by the facial feature extraction module 208 to recognize the face present in the image. In an embodiment, the layers of the DNN can be used to match the face present in the image with target faces available in an image database. In an embodiment, other machine learning models and image matching models may be used for matching the extracted features to recognize the face present in the image” and “face recognition at block 308”]) Claims 2, 3, 6, 9, 10, 13, 16, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Dong U.S. Publication No. 2022/0207260 A1 in view of Liu et al. U.S. Publication No. 2021/0142164 A1 in view of Lin et al. U.S. Publication No. 
2020/0349464 A1 as applied to claims 1, 5, 8, 12, 15 and 19 above, and further in view of Yoo et al. U.S. Publication No. 2016/0148080 A1. - With regards to claim 2, Dong in view of Liu et al. in view of Lin et al. disclose the method of Claim 1. Dong fails to disclose explicitly wherein: the first image processing task comprises facial segmentation; and the second image processing task comprises facial landmark detection. Pertaining to analogous art, Yoo et al. disclose wherein: the first image processing task comprises facial segmentation; (Yoo et al., Figs. 6, 12, 15B - 15D & 17 - 19, Pg. 5 ¶ 0054 - 0059, Pg. 6 ¶ 0070, Pg. 7 ¶ 0077 - 0081, Pg. 8 ¶ 0099 - 0104, Pg. 8 ¶ 0106 - Pg. 9 ¶ 0107) and the second image processing task comprises facial landmark detection. (Yoo et al., Figs. 10 - 12, 15C, 15D, 17 & 18, Pg. 7 ¶ 0077 - 0086, Pg. 8 ¶ 0101 - 0104, Pg. 8 ¶ 0106 - Pg. 9 ¶ 0108) Dong in view of Liu et al. in view of Lin et al. and Yoo et al. are combinable because they are all directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks and, similar to Dong and Lin et al., Yoo et al. is also directed towards processing images to perform various different image processing tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Dong in view of Liu et al. in view of Lin et al. with the teachings of Yoo et al. This modification would have been prompted in order to substitute the image processing tasks of Yoo et al. for the image processing functions of Dong. The image processing tasks of Yoo et al. 
could be substituted in place of the image processing functions of Dong utilizing well-known techniques in the art and would likely yield predictable results, in that, in the combination, facial segmentation and facial landmark detection image processing tasks would be performed by the trained machine learning model of the combined base device. Therefore, it would have been obvious to combine Dong in view of Liu et al. in view of Lin et al. with Yoo et al. to obtain the invention as specified in claim 2. - With regards to claim 3, Dong in view of Liu et al. in view of Lin et al. disclose the method of Claim 1. Dong fails to disclose explicitly wherein at least one of the multiple branch models is configured to perform at least one of: gaze estimation, head pose estimation, emotion estimation, heat map generation, and physical attribute estimation. Pertaining to analogous art, Yoo et al. disclose wherein at least one of the multiple branch models is configured to perform at least one of: gaze estimation, head pose estimation, emotion estimation, heat map generation, and physical attribute estimation. (Yoo et al., Pg. 4 ¶ 0048 - 0051, Pg. 5 ¶ 0054 - 0059, Pg. 7 ¶ 0077 - 0085, Pg. 8 ¶ 0099 - 0103 [“whereas the recognizer trained by the trainer 120 may simultaneously recognize an ID, a gender, an age, an ethnicity, an attractiveness, a facial expression, and an emotion from the input image”]) Dong in view of Liu et al. in view of Lin et al. and Yoo et al. are combinable because they are all directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks and, similar to Dong and Lin et al., Yoo et al. 
is also directed towards processing images to perform various different image processing tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Dong in view of Liu et al. in view of Lin et al. with the teachings of Yoo et al. This modification would have been prompted in order to enhance the combined base device of Dong in view of Liu et al. in view of Lin et al. with the well-known and applicable technique Yoo et al. applied to a comparable device. Performing at least one of gaze estimation, head pose estimation, emotion estimation, heat map generation, and physical attribute estimation by at least one of the multiple branch models, as taught by Yoo et al., would enhance the combined base device by enabling the machine learning model to perform additional related image processing tasks with respect to input face images thereby allowing for the combined base device to be utilized in various additional image processing applications and increasing its overall appeal and usefulness to potential end-users. Furthermore, this modification would have been prompted by the teachings and suggestions of Yoo et al. that training a machine learning model to recognize a plurality of face elements simultaneously increases its facial recognition accuracy, see at least page 4 paragraphs 0046 and 0048 - 0051, page 5 paragraphs 0054 - 0055 and page 10 paragraphs 0121 - 0122 of Yoo et al. 
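As a rough sketch only (not part of the record; the backbone size, task names, and output dimensions here are invented for illustration), the multi-branch arrangement described above — several task-specific branch heads, such as pose and emotion estimators, sharing one forward pass of a common backbone — might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# One shared backbone (stand-in for the common DNN backbone of the combination).
W_base = rng.normal(size=(64, 32)) * 0.1

# Separate branch heads, each performing a different face-analysis task.
heads = {"head_pose": rng.normal(size=(32, 3)) * 0.1,    # yaw / pitch / roll
         "emotion": rng.normal(size=(32, 7)) * 0.1,      # seven emotion classes
         "attributes": rng.normal(size=(32, 5)) * 0.1}   # misc. physical attributes

def analyze_face(x):
    """One shared forward pass feeds every task-specific branch simultaneously."""
    h = np.tanh(x @ W_base)                  # shared latent representation
    return {task: h @ W for task, W in heads.items()}

outputs = analyze_face(rng.normal(size=(1, 64)))
```

The design choice at issue is that adding a branch head costs only one extra matrix product per task, since the backbone computation is reused — which is why the examiner frames the additional estimation tasks as an enhancement of the existing multi-branch model rather than a redesign.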
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that at least one of gaze estimation, head pose estimation, emotion estimation, heat map generation, and physical attribute estimation would be performed by at least one of the multiple branch models so as to enable the machine learning model of the combined base device to be utilized in various additional image processing applications, to increase its overall appeal and usefulness to potential end-users and to help improve its facial recognition accuracy. Therefore, it would have been obvious to combine Dong in view of Liu et al. in view of Lin et al. with Yoo et al. to obtain the invention as specified in claim 3. - With regards to claim 6, Dong in view of Liu et al. in view of Lin et al. disclose the method of Claim 5. Dong fails to disclose explicitly wherein: the first image processing task comprises facial segmentation; the second image processing task comprises facial landmark detection; and the additional image processing task comprises edge detection. Pertaining to analogous art, Yoo et al. disclose wherein: the first image processing task comprises facial segmentation; (Yoo et al., Figs. 6, 12, 15B - 15D & 17 - 19, Pg. 5 ¶ 0054 - 0059, Pg. 6 ¶ 0070, Pg. 7 ¶ 0077 - 0081, Pg. 8 ¶ 0099 - 0104, Pg. 8 ¶ 0106 - Pg. 9 ¶ 0107) the second image processing task comprises facial landmark detection; (Yoo et al., Figs. 10 - 12, 15C, 15D, 17 & 18, Pg. 7 ¶ 0077 - 0086, Pg. 8 ¶ 0101 - 0104, Pg. 8 ¶ 0106 - Pg. 9 ¶ 0108) and the additional image processing task comprises edge detection. (Yoo et al., Figs. 6, 7, 12, 15B - 15D, 17 & 20, Pg. 4 ¶ 0051, Pg. 5 ¶ 0053 - 0056, 0060 and 0062, Pg. 6 ¶ 0064 and 0070 - 0071, Pg. 7 ¶ 0077 - 0079, Pg. 8 ¶ 0099 - 0104, Pg. 8 ¶ 0106, Pg. 9 ¶ 0108) Dong in view of Liu et al. in view of Lin et al. and Yoo et al. 
are combinable because they are all directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks and, similar to Dong and Lin et al., Yoo et al. is also directed towards processing images to perform various different image processing tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Dong in view of Liu et al. in view of Lin et al. with the teachings of Yoo et al. This modification would have been prompted in order to substitute the image processing tasks of Yoo et al. for the image processing functions of Dong. The image processing tasks of Yoo et al. could be substituted in place of the image processing functions of Dong utilizing well-known techniques in the art and would likely yield predictable results, in that, in the combination, facial segmentation, facial landmark detection and edge detection image processing tasks would be performed by the trained machine learning model of the combined base device. Therefore, it would have been obvious to combine Dong in view of Liu et al. in view of Lin et al. with Yoo et al. to obtain the invention as specified in claim 6. - With regards to claim 9, Dong in view of Liu et al. in view of Lin et al. disclose the electronic device of Claim 8. Dong fails to disclose explicitly wherein: the first image processing task comprises facial segmentation; and the second image processing task comprises facial landmark detection. Pertaining to analogous art, Yoo et al. 
disclose wherein: the first image processing task comprises facial segmentation; (Yoo et al., Figs. 6, 12, 15B - 15D & 17 - 19, Pg. 5 ¶ 0054 - 0059, Pg. 6 ¶ 0070, Pg. 7 ¶ 0077 - 0081, Pg. 8 ¶ 0099 - 0104, Pg. 8 ¶ 0106 - Pg. 9 ¶ 0107) and the second image processing task comprises facial landmark detection. (Yoo et al., Figs. 10 - 12, 15C, 15D, 17 & 18, Pg. 7 ¶ 0077 - 0086, Pg. 8 ¶ 0101 - 0104, Pg. 8 ¶ 0106 - Pg. 9 ¶ 0108) Dong in view of Liu et al. in view of Lin et al. and Yoo et al. are combinable because they are all directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks and, similar to Dong and Lin et al., Yoo et al. is also directed towards processing images to perform various different image processing tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Dong in view of Liu et al. in view of Lin et al. with the teachings of Yoo et al. This modification would have been prompted in order to substitute the image processing tasks of Yoo et al. for the image processing functions of Dong. The image processing tasks of Yoo et al. could be substituted in place of the image processing functions of Dong utilizing well-known techniques in the art and would likely yield predictable results, in that, in the combination, facial segmentation and facial landmark detection image processing tasks would be performed by the trained machine learning model of the combined base device. Therefore, it would have been obvious to combine Dong in view of Liu et al. in view of Lin et al. with Yoo et al. 
to obtain the invention as specified in claim 9. - With regards to claim 10, Dong in view of Liu et al. in view of Lin et al. disclose the electronic device of Claim 8. Dong fails to disclose explicitly wherein at least one of the multiple branch models is configured to perform at least one of: gaze estimation, head pose estimation, emotion estimation, heat map generation, and physical attribute estimation. Pertaining to analogous art, Yoo et al. disclose wherein at least one of the multiple branch models is configured to perform at least one of: gaze estimation, head pose estimation, emotion estimation, heat map generation, and physical attribute estimation. (Yoo et al., Pg. 4 ¶ 0048 - 0051, Pg. 5 ¶ 0054 - 0059, Pg. 7 ¶ 0077 - 0085, Pg. 8 ¶ 0099 - 0103 [“whereas the recognizer trained by the trainer 120 may simultaneously recognize an ID, a gender, an age, an ethnicity, an attractiveness, a facial expression, and an emotion from the input image”]) Dong in view of Liu et al. in view of Lin et al. and Yoo et al. are combinable because they are all directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks and, similar to Dong and Lin et al., Yoo et al. is also directed towards processing images to perform various different image processing tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Dong in view of Liu et al. in view of Lin et al. with the teachings of Yoo et al. This modification would have been prompted in order to enhance the combined base device of Dong in view of Liu et al. in view of Lin et al. with the well-known and applicable technique Yoo et al. applied to a comparable device. 
Performing at least one of gaze estimation, head pose estimation, emotion estimation, heat map generation, and physical attribute estimation by at least one of the multiple branch models, as taught by Yoo et al., would enhance the combined base device by enabling the machine learning model to perform additional related image processing tasks with respect to input face images thereby allowing for the combined base device to be utilized in various additional image processing applications and increasing its overall appeal and usefulness to potential end-users. Furthermore, this modification would have been prompted by the teachings and suggestions of Yoo et al. that training a machine learning model to recognize a plurality of face elements simultaneously increases its facial recognition accuracy, see at least page 4 paragraphs 0046 and 0048 - 0051, page 5 paragraphs 0054 - 0055 and page 10 paragraphs 0121 - 0122 of Yoo et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that at least one of gaze estimation, head pose estimation, emotion estimation, heat map generation, and physical attribute estimation would be performed by at least one of the multiple branch models so as to enable the machine learning model of the combined base device to be utilized in various additional image processing applications, to increase its overall appeal and usefulness to potential end-users and to help improve its facial recognition accuracy. Therefore, it would have been obvious to combine Dong in view of Liu et al. in view of Lin et al. with Yoo et al. to obtain the invention as specified in claim 10. - With regards to claim 13, Dong in view of Liu et al. in view of Lin et al. disclose the electronic device of Claim 12. 
Dong fails to disclose explicitly wherein: the first image processing task comprises facial segmentation; the second image processing task comprises facial landmark detection; and the additional image processing task comprises edge detection. Pertaining to analogous art, Yoo et al. disclose wherein: the first image processing task comprises facial segmentation; (Yoo et al., Figs. 6, 12, 15B - 15D & 17 - 19, Pg. 5 ¶ 0054 - 0059, Pg. 6 ¶ 0070, Pg. 7 ¶ 0077 - 0081, Pg. 8 ¶ 0099 - 0104, Pg. 8 ¶ 0106 - Pg. 9 ¶ 0107) the second image processing task comprises facial landmark detection; (Yoo et al., Figs. 10 - 12, 15C, 15D, 17 & 18, Pg. 7 ¶ 0077 - 0086, Pg. 8 ¶ 0101 - 0104, Pg. 8 ¶ 0106 - Pg. 9 ¶ 0108) and the additional image processing task comprises edge detection. (Yoo et al., Figs. 6, 7, 12, 15B - 15D, 17 & 20, Pg. 4 ¶ 0051, Pg. 5 ¶ 0053 - 0056, 0060 and 0062, Pg. 6 ¶ 0064 and 0070 - 0071, Pg. 7 ¶ 0077 - 0079, Pg. 8 ¶ 0099 - 0104, Pg. 8 ¶ 0106, Pg. 9 ¶ 0108) Dong in view of Liu et al. in view of Lin et al. and Yoo et al. are combinable because they are all directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks and, similar to Dong and Lin et al., Yoo et al. is also directed towards processing face images to perform various different image processing tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Dong in view of Liu et al. in view of Lin et al. with the teachings of Yoo et al. This modification would have been prompted in order to substitute the image processing tasks of Yoo et al. for the image processing functions of Dong. The image processing tasks of Yoo et al. 
could be substituted in place of the image processing functions of Dong utilizing well-known techniques in the art and would likely yield predictable results, in that, in the combination, facial segmentation, facial landmark detection and edge detection image processing tasks would be performed by the trained machine learning model of the combined base device. Therefore, it would have been obvious to combine Dong in view of Liu et al. in view of Lin et al. with Yoo et al. to obtain the invention as specified in claim 13. - With regards to claim 16, Dong in view of Liu et al. in view of Lin et al. disclose the method of Claim 15. Dong fails to disclose explicitly wherein: the first image processing task comprises facial segmentation; and the second image processing task comprises facial landmark detection. Pertaining to analogous art, Yoo et al. disclose wherein: the first image processing task comprises facial segmentation; (Yoo et al., Figs. 6, 12, 15B - 15D & 17 - 19, Pg. 5 ¶ 0054 - 0059, Pg. 6 ¶ 0070, Pg. 7 ¶ 0077 - 0081, Pg. 8 ¶ 0099 - 0104, Pg. 8 ¶ 0106 - Pg. 9 ¶ 0107) and the second image processing task comprises facial landmark detection. (Yoo et al., Figs. 10 - 12, 15C, 15D, 17 & 18, Pg. 7 ¶ 0077 - 0086, Pg. 8 ¶ 0101 - 0104, Pg. 8 ¶ 0106 - Pg. 9 ¶ 0108) Dong in view of Liu et al. in view of Lin et al. and Yoo et al. are combinable because they are all directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks and, similar to Dong and Lin et al., Yoo et al. is also directed towards processing images to perform various different image processing tasks. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Dong in view of Liu et al. in view of Lin et al. with the teachings of Yoo et al. This modification would have been prompted in order to substitute the image processing functions of Dong for the image processing tasks of Yoo et al. The image processing tasks of Yoo et al. could be substituted in place of the image processing functions of Dong utilizing well-known techniques in the art and would likely yield predictable results, in that, in the combination, facial segmentation and facial landmark detection image processing tasks would be performed by the trained machine learning model of the combined base device. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that facial segmentation and facial landmark detection image processing tasks would be performed by the trained machine learning model of the combined base device. Therefore, it would have been obvious to combine Dong in view of Liu et al. in view of Lin et al. with Yoo et al. to obtain the invention as specified in claim 16.

With regards to claim 17, Dong in view of Liu et al. in view of Lin et al. disclose the method of Claim 15. Dong fails to disclose explicitly wherein at least one of the multiple branch models is configured to perform at least one of: gaze estimation, head pose estimation, emotion estimation, heat map generation, and physical attribute estimation.

Pertaining to analogous art, Yoo et al. disclose wherein at least one of the multiple branch models is configured to perform at least one of: gaze estimation, head pose estimation, emotion estimation, heat map generation, and physical attribute estimation. (Yoo et al., Pg. 4 ¶ 0048 - 0051, Pg. 5 ¶ 0054 - 0059, Pg. 7 ¶ 0077 - 0085, Pg. 8 ¶ 0099 - 0103 [“whereas the recognizer trained by the trainer 120 may simultaneously recognize an ID, a gender, an age, an ethnicity, an attractiveness, a facial expression, and an emotion from the input image”])

Dong in view of Liu et al. in view of Lin et al. and Yoo et al. are combinable because they are all directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks and, similar to Dong and Lin et al., Yoo et al. is also directed towards processing images to perform various different image processing tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Dong in view of Liu et al. in view of Lin et al. with the teachings of Yoo et al. This modification would have been prompted in order to enhance the combined base device of Dong in view of Liu et al. in view of Lin et al. with the well-known and applicable technique Yoo et al. applied to a comparable device. Performing at least one of gaze estimation, head pose estimation, emotion estimation, heat map generation, and physical attribute estimation by at least one of the multiple branch models, as taught by Yoo et al., would enhance the combined base device by enabling the machine learning model to perform additional related image processing tasks with respect to input face images thereby allowing for the combined base device to be utilized in various additional image processing applications and increasing its overall appeal and usefulness to potential end-users. Furthermore, this modification would have been prompted by the teachings and suggestions of Yoo et al. that training a machine learning model to recognize a plurality of face elements simultaneously increases its facial recognition accuracy, see at least page 4 paragraphs 0046 and 0048 - 0051, page 5 paragraphs 0054 - 0055 and page 10 paragraphs 0121 - 0122 of Yoo et al.
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that at least one of gaze estimation, head pose estimation, emotion estimation, heat map generation, and physical attribute estimation would be performed by at least one of the multiple branch models so as to enable the machine learning model of the combined base device to be utilized in various additional image processing applications, to increase its overall appeal and usefulness to potential end-users and to help improve its facial recognition accuracy. Therefore, it would have been obvious to combine Dong in view of Liu et al. in view of Lin et al. with Yoo et al. to obtain the invention as specified in claim 17.

With regards to claim 20, Dong in view of Liu et al. in view of Lin et al. disclose the method of Claim 19. Dong fails to disclose explicitly wherein: the first image processing task comprises facial segmentation; the second image processing task comprises facial landmark detection; and the additional image processing task comprises edge detection.

Pertaining to analogous art, Yoo et al. disclose wherein: the first image processing task comprises facial segmentation; (Yoo et al., Figs. 6, 12, 15B - 15D & 17 - 19, Pg. 5 ¶ 0054 - 0059, Pg. 6 ¶ 0070, Pg. 7 ¶ 0077 - 0081, Pg. 8 ¶ 0099 - 0104, Pg. 8 ¶ 0106 - Pg. 9 ¶ 0107) the second image processing task comprises facial landmark detection; (Yoo et al., Figs. 10 - 12, 15C, 15D, 17 & 18, Pg. 7 ¶ 0077 - 0086, Pg. 8 ¶ 0101 - 0104, Pg. 8 ¶ 0106 - Pg. 9 ¶ 0108) and the additional image processing task comprises edge detection. (Yoo et al., Figs. 6, 7, 12, 15B - 15D, 17 & 20, Pg. 4 ¶ 0051, Pg. 5 ¶ 0053 - 0056, 0060 and 0062, Pg. 6 ¶ 0064 and 0070 - 0071, Pg. 7 ¶ 0077 - 0079, Pg. 8 ¶ 0099 - 0104, Pg. 8 ¶ 0106, Pg. 9 ¶ 0108)

Dong in view of Liu et al. in view of Lin et al. and Yoo et al. are combinable because they are all directed towards systems and methods that utilize multi-branch neural network models to perform multiple different functions/tasks and, similar to Dong and Lin et al., Yoo et al. is also directed towards processing images to perform various different image processing tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Dong in view of Liu et al. in view of Lin et al. with the teachings of Yoo et al. This modification would have been prompted in order to substitute the image processing functions of Dong for the image processing tasks of Yoo et al. The image processing tasks of Yoo et al. could be substituted in place of the image processing functions of Dong utilizing well-known techniques in the art and would likely yield predictable results, in that, in the combination, facial segmentation, facial landmark detection and edge detection image processing tasks would be performed by the trained machine learning model of the combined base device. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that facial segmentation, facial landmark detection and edge detection image processing tasks would be performed by the trained machine learning model of the combined base device. Therefore, it would have been obvious to combine Dong in view of Liu et al. in view of Lin et al. with Yoo et al. to obtain the invention as specified in claim 20.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC RUSH, whose telephone number is (571) 270-3017. The examiner can normally be reached 9am - 5pm, Monday - Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ERIC RUSH/
Primary Examiner, Art Unit 2677
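As context for the rejection above: claims 13 and 20 recite a branched architecture in which a shared base model produces features that separate branch models consume for facial segmentation, facial landmark detection and edge detection. The following is a minimal NumPy sketch of that topology only; all layer shapes, the 32x32 input size and the 68-landmark count are illustrative assumptions, not taken from the application or from any cited reference.

```python
# Illustrative sketch of a branched model: one shared base model ("backbone")
# computes features once, and independent branch heads map those features to
# task-specific outputs. Weights are random; only the topology is meaningful.
import numpy as np

rng = np.random.default_rng(0)

def base_model(image):
    """Shared base model: flatten the image and project to a feature vector."""
    w = rng.standard_normal((image.size, 64)) * 0.01
    return np.maximum(image.reshape(-1) @ w, 0.0)   # ReLU features

def segmentation_branch(feats, h=32, w=32):
    """Branch head: per-pixel class logits (2 classes: face / background)."""
    wt = rng.standard_normal((feats.size, h * w * 2)) * 0.01
    return (feats @ wt).reshape(h, w, 2)

def landmark_branch(feats, n_landmarks=68):
    """Branch head: (x, y) coordinates for each facial landmark."""
    wt = rng.standard_normal((feats.size, n_landmarks * 2)) * 0.01
    return (feats @ wt).reshape(n_landmarks, 2)

def edge_branch(feats, h=32, w=32):
    """Branch head: per-pixel edge probability map (sigmoid output)."""
    wt = rng.standard_normal((feats.size, h * w)) * 0.01
    return 1.0 / (1.0 + np.exp(-(feats @ wt).reshape(h, w)))

image = rng.random((32, 32))      # stand-in for an image containing a face
feats = base_model(image)         # computed once, shared by all branches
seg = segmentation_branch(feats)
lmk = landmark_branch(feats)
edges = edge_branch(feats)
print(seg.shape, lmk.shape, edges.shape)  # (32, 32, 2) (68, 2) (32, 32)
```

The point of the sketch is the data flow the claims describe: the expensive base-model pass happens once per image, and adding another task (e.g. gaze or emotion estimation, as in claim 17) means adding another head on the same shared features rather than another full network.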

Prosecution Timeline

Dec 01, 2023
Application Filed
Nov 15, 2025
Non-Final Rejection — §103
Jan 14, 2026
Applicant Interview (Telephonic)
Jan 14, 2026
Examiner Interview Summary
Feb 10, 2026
Response Filed
Apr 04, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586229
COMPUTER IMPLEMENTED METHODS AND DEVICES FOR DETERMINING DIMENSIONS AND DISTANCES OF HEAD FEATURES
2y 5m to grant — Granted Mar 24, 2026
Patent 12548292
METHOD AND SYSTEM FOR IDENTIFYING REFLECTIONS IN THERMAL IMAGES
2y 5m to grant — Granted Feb 10, 2026
Patent 12548395
SYSTEMS, METHODS AND DEVICES FOR MONITORING BETTING ACTIVITIES
2y 5m to grant — Granted Feb 10, 2026
Patent 12541856
MASKING OF OBJECTS IN AN IMAGE STREAM
2y 5m to grant — Granted Feb 03, 2026
Patent 12518504
METHOD FOR CALIBRATING AN OBJECT RE-IDENTIFICATION SOLUTION IMPLEMENTING AN ARRAY OF A PLURALITY OF CAMERAS
2y 5m to grant — Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
61%
Grant Probability
97%
With Interview (+36.2%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 628 resolved cases by this examiner. Grant probability derived from career allow rate.
