Prosecution Insights
Last updated: April 19, 2026
Application No. 18/171,653

DEVICE AND METHOD FOR CALCULATING SWINGING DIRECTION OF HUMAN FACE IN OBSCURED HUMAN FACE IMAGE

Final Rejection §103
Filed: Feb 20, 2023
Examiner: PHAM, NHUT HUY
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Industrial Technology Research Institute
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (above average; 42 granted / 53 resolved; +17.2% vs TC avg)
Interview Lift: strong, +26.8% (allowance rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 3y 0m average prosecution; 31 applications currently pending
Career History: 84 total applications across all art units

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§103: 62.2% (+22.2% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)

Based on career data from 53 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

The United States Patent & Trademark Office acknowledges the response for the current application filed on 07/31/2025. The Office has reviewed the submitted documents and makes the following comments.

Amendment

Applicant submitted amendments on 07/31/2025. The Examiner acknowledges the amendment and has reviewed the claims accordingly.

Applicant's Arguments

Applicant states that the cited prior art does not teach the amended claims; specifically, Applicant states that Lee does not disclose "a calculated swinging direction," and that the rejection under 35 U.S.C. 103 should therefore be withdrawn.

Examiner's Response

The Examiner respectfully disagrees. In the previous Office action, the Examiner cited Yang (¶ [0078], [0081-0085] and [0095]) on page 6 to teach "use the updated feature anchor point and the adjusted three-dimensional model to calculate a swinging direction of the human face"; Applicant, however, appears to argue that Lee was relied upon for this limitation. Specifically, the Examiner finds that Yang teaches obtaining 2D and 3D coordinates of facial landmarks, which correspond to the feature anchor point and the three-dimensional model respectively, and using the obtained 2D and 3D coordinates of the facial landmarks to estimate head pose angles, which correspond to the swinging direction of the human face. Applicant's argument therefore cannot be applied against Lee. Additionally, the limitations of the claims were identified and correlated with the references as indicated above and in the first Office action on the merits. Applicant has merely alleged that the limitations are not met, and has not provided any evidence or argument directed to how the identified elements fail to meet the claimed limitations, or to how the identified elements are otherwise distinguishable from the claimed limitations, as required by 37 CFR § 1.111(b).
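Practitioner's note: the general technique the Examiner attributes to Yang — aligning detected landmarks to a standard 3D face model and reading head rotation (roll/pitch/yaw) angles off the resulting alignment — can be sketched as follows. This is a simplified illustration using a 3D-3D Kabsch alignment with made-up landmark coordinates; Yang's actual disclosure works from 2D landmark coordinates and a 3D face model, which is a 2D-3D pose problem.

```python
import numpy as np

def head_pose_angles(model_pts, observed_pts):
    """Estimate roll/pitch/yaw (degrees) of the rotation that maps the
    face-model landmarks onto the observed landmarks (Kabsch alignment)."""
    A = model_pts - model_pts.mean(axis=0)
    B = observed_pts - observed_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation, det(R) = +1
    # Extract Euler angles from the rotation matrix.
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    yaw = np.degrees(np.arcsin(-R[2, 0]))
    roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return roll, pitch, yaw

# Hypothetical "standard 3D face model" landmarks (glabella, eyes, nose tip).
model = np.array([[0.0, 0.0, 0.0],
                  [-30.0, 0.0, -10.0],
                  [30.0, 0.0, -10.0],
                  [0.0, -40.0, 20.0]])
# Observed landmarks: the model swung 20 degrees about the vertical axis (yaw).
theta = np.radians(20.0)
Ry = np.array([[np.cos(theta), 0.0, np.sin(theta)],
               [0.0, 1.0, 0.0],
               [-np.sin(theta), 0.0, np.cos(theta)]])
observed = model @ Ry.T
roll, pitch, yaw = head_pose_angles(model, observed)
```

With exact, noise-free points the alignment recovers the 20-degree yaw swing; in practice the 2D-3D variant is typically solved with a perspective-n-point algorithm rather than this rigid alignment.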
Therefore, the Examiner maintains the rejection.

Claim Status

Claims 1-2, 6-12 and 16-20 are rejected under 35 U.S.C. § 103:
Claims 1, 6-8, 10-11, 16-18 and 20 are rejected over Yang in view of Muhi in view of Brandt, and further in view of Lee.
Claims 2 and 12 are rejected over Yang in view of Muhi in view of Brandt in view of Lee, and further in view of Cheng.
Claims 3-5 and 13-15 are objected to.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6-8, 10-11, 16-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US-20200257895-A1, hereinafter Yang) in view of Muhi et al. (Muhi, Omar Adel, et al., "Transfer learning for robust masked face recognition," IEEE, hereinafter Muhi) in view of Brandt et al. (US-20140099031-A1, cited in IDS, hereinafter Brandt), and further in view of Lee et al. (Lee, Sung Joo, et al., "Real-time gaze estimator based on driver's head orientation for forward collision warning system," IEEE, hereinafter Lee).

CLAIM 1

Regarding Claim 1, Yang teaches a device for calculating a swinging direction of a human face (Yang, ¶ [0033]: "a system for estimating head pose angles of a user.") in an obscured human face image (Yang, ¶ [0062]: "a user can wear a mask … a registered front head pose image of the user wearing the mask"), the device comprising:

an image capturing device (Yang, ¶ [0040]: "a camera taking the image"; FIG. 4A, camera 404);

a storage medium (Yang, ¶ [0059]: "a computer-readable storage device such as one or more non-transitory memories or other types of hardware-based storage devices") storing a three-dimensional model (Yang, ¶ [0082]: "a standard 3D face model"); and

a processor coupled to the image capturing device and the storage medium (Yang, ¶ [0040]: "one or more processors"), wherein the processor is configured to:

capture an obscured human face image comprising a human face through the image capturing device (Yang, ¶ [0060]: "a camera can be used to monitor a user (e.g., a vehicle driver) and take periodic images of the user", ¶ [0062]: "a user can wear a mask");

use face detection technology to obtain a feature anchor point (Yang, ¶ [0009]: "detect the plurality of facial landmarks within the first image and the second image", ¶ [0015]: "determine a first set of two-dimensional (2D) coordinates of the plurality of facial landmarks of the user based on the first image.") to be replaced in the obscured human face image (Yang, ¶ [0062]: "a user can wear a mask … a registered front head pose image of the user wearing the mask");

perform an adjustment operation on a three-dimensional model (Yang, ¶ [0082-0083]: "aligning the proposed key facial landmarks to a standard 3D face model to obtain the 3D coordinates of these facial landmarks (in the coordinate system of the face model)") to obtain an adjusted three-dimensional model (Yang, ¶ [0082-0083]: "obtain the 3D coordinates of these facial landmarks (in the coordinate system of the face model)". The Examiner notes the 3D coordinates are a representation of a 3D model); and

use the updated feature anchor point (Yang, ¶ [0078]: "key facial landmarks can be parameterized, which can include determining the landmark coordinates in relation to one or more of the new head pose axes." Facial landmarks can be parameterized and later used in determining head pose angles) and the adjusted three-dimensional model to calculate a swinging direction of the human face (Yang, FIG. 4B, operations 428 and 452; ¶ [0081-0085]: "an online process 418 for estimating the head pose angles can take place… At operation 428, the original rotation angles α0, β0, and γ0 can be calculated and the front head pose can be registered with the original rotation angles. As used herein, angles αi, βi, and γi refer to rotation angles associated with roll, pitch, and yaw of the user's head", ¶ [0095]: "operation 452". Yang teaches that the 3D coordinates of the landmarks (the 3D model) and the parameters of the 2D landmarks are used to estimate head pose angles).

Yang does not explicitly disclose using non-obscured face detection technology to obtain a feature anchor point to be replaced in the obscured human face image, or using obscured face detection technology to obtain a plurality of candidate feature anchor points in the obscured human face image.

Muhi is in the same field of art of facial detection techniques. Further, Muhi teaches using non-obscured face detection technology to obtain a feature anchor point to be replaced in the obscured human face image, and using obscured face detection technology to obtain a plurality of candidate feature anchor points in the obscured human face image (Muhi, pages 3 and 4; see reconstructed text and annotated table below. Muhi teaches using two different face detection methods, Dlib and MediaPipe, to detect facial landmarks; MediaPipe scored higher in masked face detection.)

[Image: media_image1.png — annotated excerpt from Muhi]
[Image: media_image2.png — annotated table from Muhi]

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to simply substitute Yang's face detection method with the face detection methods taught by Muhi, to make a facial landmark detecting system based on Dlib and MediaPipe; one of ordinary skill in the art would be motivated to combine the references since Yang discloses detecting facial landmarks and Muhi teaches methods to detect facial landmarks (Muhi, page 3, section A, Face Detection; see reconstructed text below).

The combination of Yang and Muhi does not explicitly disclose using the plurality of candidate feature anchor points to determine an updated feature anchor point corresponding to the feature anchor point to be replaced.

Brandt is in the same field of art of facial detection. Further, Brandt teaches using the plurality of candidate feature anchor points (Brandt, ¶ [0036-0039]: "the global optimization may find the sequence of candidate locations having a maximum sum of unary and binary scores in O(NM2) time where N is the number of landmark feature points and M is the number of candidate locations for each point." Brandt teaches a plurality of candidate facial landmarks for a detected facial landmark) to determine an updated feature anchor point corresponding to the feature anchor point to be replaced (Brandt, ¶ [0040-0047]: "a shape model (e.g., a component-based shape model) may be applied to update the respective feature point locations for each object component of the detected object". Brandt teaches using a shape model to update feature points based on the candidates).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang and Muhi by incorporating the facial landmark localization method taught by Brandt, to make a face detection system that can adjust detected landmarks; one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need to obtain facial landmarks in a more accurate manner (Brandt, ¶ [0073]: "The disclosed techniques may localize feature points in a reliable and accurate manner under a broad range of appearance variation").

The combination of Yang, Muhi and Brandt teaches that the swinging direction comprises a first swinging direction at a first time point (Yang, ¶ [0006]: "determining, by one or more processors, a first rotation between a first head pose axis associated with a first image of a plurality of images of the user", ¶ [0084]: "the head pose angle calculation in image sequences") and comprises a second swinging direction at a second time point (Yang, ¶ [0006]: "determine a second rotation between a second head pose axis associated with a second image of the plurality of images of the user", ¶ [0084]. Yang teaches that images are captured in sequences, so the first and second images are captured at different time points), and an output device (Yang, ¶ [0145]: "The output interface 1530 may interface to or include a display device, such as a touchscreen").

The combination of Yang, Muhi and Brandt does not explicitly disclose using a moving average algorithm to display the first swinging direction and the second swinging direction.

[Image: media_image5.png — FIG. 10 of Lee]

Lee is in the same field of art of analyzing facial images. Further, Lee teaches using a moving average algorithm to display the first swinging direction and the second swinging direction (Lee, pages 258-259, section D, Pitch Estimation; FIG. 10, see above. Lee teaches using a moving average to present the pitch angle).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang, Muhi and Brandt by incorporating the method of applying a moving average to the head orientation angle taught by Lee, to make a system that can present the head orientation angle using a moving average; one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need to present the data in a more reliable manner (Lee: "we can find that the true mfrontal and sfrontal values can reliably be found using the moving average method."). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

CLAIM 6

Regarding Claim 6, the combination of Yang, Muhi, Brandt and Lee teaches the device of Claim 1.
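Practitioner's note on the Lee reference relied upon in the Claim 1 analysis above: "using a moving average algorithm to display the first swinging direction and the second swinging direction" is, in its simplest form, a windowed running mean over the angle reported at successive time points. The sketch below is a generic illustration with hypothetical angle values, not Lee's actual implementation.

```python
from collections import deque

def moving_average_stream(window):
    """Return a callable that pushes one angle sample and returns the
    mean of the last `window` samples (fewer while the buffer fills)."""
    buf = deque(maxlen=window)
    def push(angle):
        buf.append(angle)
        return sum(buf) / len(buf)
    return push

smooth = moving_average_stream(window=3)
raw = [10.0, 12.0, 30.0, 11.0, 9.0]   # hypothetical yaw angles, one per frame
smoothed = [smooth(a) for a in raw]
```

The outlier at the third frame (30.0) is damped in the smoothed sequence, which is the reliability benefit the Examiner cites Lee for.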
In addition, the combination of Yang, Muhi, Brandt and Lee teaches that the three-dimensional model comprises a plurality of point locations (Yang, ¶ [0061]: "translational components (e.g., three-dimensional (3D) coordinates of key facial landmarks in relation to the new coordinate system axis) can be determined based on an initial image of the user,"), and the plurality of point locations comprise a reference point location and a plurality of remaining point locations (Yang, ¶ [0077]: "In some aspects, the origin of a head pose axis is set to the center of a user's head (e.g., point G in FIG. 1)". Yang teaches that one point among the plurality of facial points can be set as an origin point.), wherein coordinates of each of the plurality of remaining point locations are computed according to reference coordinates of the reference point location (Yang, ¶ [0082]: "obtain the 3D coordinates of these facial landmarks (in the coordinate system of the face model), and then adjusting the results based on the origin of the new head pose axis O2." Yang teaches adjusting the other facial landmarks based on one origin point.)

CLAIM 7

Regarding Claim 7, the combination of Yang, Muhi, Brandt and Lee teaches the device of Claim 6.

In addition, the combination teaches that the plurality of point locations comprise a central axis point location (Muhi, FIG. 3, the feature point between the two eyes) and a non-central axis point location (Muhi, FIG. 3, the feature points on the two eyes), wherein the non-central axis point location comprises a first point location and a second point location (Muhi, FIG. 3, the feature points on the two eyes), and the first point location and the second point location are relative to a central axis of the three-dimensional model (Yang, FIG. 8, diagram 804: the two feature points of the two eyes are symmetrical across the central axis Y2.), wherein the processor is further configured to: set an updated X-axis coordinate of the central axis point location to 0 (Yang, ¶ [0085]: "the midpoint between the centers of corneal curvatures can be set as the origin of the head pose axis axis_middle for this image"; the point between the two eyes can be set as the origin, which has coordinates of zero); and use original coordinates of the first point location and original coordinates of the second point location to calculate updated coordinates of the first point location and updated coordinates of the second point location (Brandt, ¶ [0027]: "the appearance search consists of sampling a range of offsets perpendicularly from the current location and identifying the offset position that minimizes the Mahalanobis distance…. The new location for each feature point is the location that minimizes this distance among the set of possible locations tested". Brandt teaches that the updated feature point is the tested location nearest to the original feature point.)

CLAIM 8

Regarding Claim 8, the combination of Yang, Muhi, Brandt and Lee teaches the device of Claim 1.

In addition, the combination teaches that the updated feature anchor point corresponds to a glabella of the human face (Muhi, FIG. 3, facial feature points are detected at the glabella of the face), and translating the updated feature anchor point to a central position in the obscured human face image (Yang, ¶ [0085]: "the midpoint between the centers of corneal curvatures can be set as the origin of the head pose axis axis_middle for this image. At operation 426, the facial landmark parameter set para_middle (e.g., as determined during the offline process 410) can be retrieved.")

CLAIM 10

Regarding Claim 10, the combination of Yang, Muhi, Brandt and Lee teaches the device of Claim 1.
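Practitioner's note on the Claim 7 limitation above (zeroing the X-axis coordinate of the central-axis point and recomputing the symmetric first and second points from their original coordinates): one plausible geometric reading is sketched below. All coordinates, and the symmetrization step itself, are hypothetical illustrations, not the claimed method or any cited reference's algorithm.

```python
def recenter_landmarks(central, left, right):
    """Shift 2D landmark coordinates so the central-axis point has X = 0,
    then place the left/right points symmetrically about that axis by
    averaging their (mirrored) offsets. Points are (x, y) tuples."""
    cx = central[0]
    # Translate so the central-axis point sits on X = 0.
    left_s = (left[0] - cx, left[1])
    right_s = (right[0] - cx, right[1])
    # Enforce symmetry: average the absolute X offsets and the Y values.
    half = (abs(left_s[0]) + abs(right_s[0])) / 2.0
    y = (left_s[1] + right_s[1]) / 2.0
    return (0.0, central[1]), (-half, y), (half, y)

# Hypothetical glabella point and two eye points in image coordinates.
center, new_left, new_right = recenter_landmarks(
    (102.0, 80.0), (70.0, 82.0), (130.0, 78.0))
```

After the call the central point lies on the axis and the eye points are mirror images across it, matching the symmetry shown in Yang's diagram 804.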
In addition, the combination of Yang, Muhi, Brandt and Lee teaches that the obscured human face image comprises a color image (Muhi, FIGS. 2 & 3) or an infrared image (Yang, ¶ [0070]: "the camera 250 can be a near infrared (NIR) camera", FIG. 2B).

CLAIM 11

Regarding Claim 11, Yang teaches a method for calculating a swinging direction of a human face (Yang, ¶ [0033]: "a system for estimating head pose angles of a user.") in an obscured human face image (Yang, ¶ [0062]: "a user can wear a mask … a registered front head pose image of the user wearing the mask"), the method comprising:

capture an obscured human face image comprising a human face (Yang, ¶ [0060]: "a camera can be used to monitor a user (e.g., a vehicle driver) and take periodic images of the user", ¶ [0062]: "a user can wear a mask");

use face detection technology to obtain a feature anchor point (Yang, ¶ [0009]: "detect the plurality of facial landmarks within the first image and the second image", ¶ [0015]: "determine a first set of two-dimensional (2D) coordinates of the plurality of facial landmarks of the user based on the first image.") to be replaced in the obscured human face image (Yang, ¶ [0062]: "a user can wear a mask … a registered front head pose image of the user wearing the mask");

perform an adjustment operation on a three-dimensional model (Yang, ¶ [0082-0083]: "aligning the proposed key facial landmarks to a standard 3D face model to obtain the 3D coordinates of these facial landmarks (in the coordinate system of the face model)") to obtain an adjusted three-dimensional model (Yang, ¶ [0082-0083]: "obtain the 3D coordinates of these facial landmarks (in the coordinate system of the face model)". The Examiner notes the 3D coordinates are a representation of a 3D model); and

use the updated feature anchor point (Yang, ¶ [0078]: "key facial landmarks can be parameterized, which can include determining the landmark coordinates in relation to one or more of the new head pose axes." Facial landmarks can be parameterized and later used in determining head pose angles) and the adjusted three-dimensional model to calculate a swinging direction of the human face (Yang, FIG. 4B, operations 428 and 452; ¶ [0081-0085]: "an online process 418 for estimating the head pose angles can take place… At operation 428, the original rotation angles α0, β0, and γ0 can be calculated and the front head pose can be registered with the original rotation angles. As used herein, angles αi, βi, and γi refer to rotation angles associated with roll, pitch, and yaw of the user's head", ¶ [0095]: "operation 452". Yang teaches that the 3D coordinates of the landmarks (the 3D model) and the parameters of the 2D landmarks are used to estimate head pose angles).

Yang does not explicitly disclose using non-obscured face detection technology to obtain a feature anchor point to be replaced in the obscured human face image, or using obscured face detection technology to obtain a plurality of candidate feature anchor points in the obscured human face image.

[Image: media_image1.png — annotated excerpt from Muhi]
[Image: media_image2.png — annotated table from Muhi]

Muhi is in the same field of art of facial detection techniques. Further, Muhi teaches using non-obscured face detection technology to obtain a feature anchor point to be replaced in the obscured human face image, and using obscured face detection technology to obtain a plurality of candidate feature anchor points in the obscured human face image (Muhi, pages 3 and 4; see reconstructed text and annotated table above. Muhi teaches using two different face detection methods, Dlib and MediaPipe, to detect facial landmarks; MediaPipe scored higher in masked face detection.)

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to simply substitute Yang's face detection method with the face detection methods taught by Muhi, to make a facial landmark detecting system based on Dlib and MediaPipe; one of ordinary skill in the art would be motivated to combine the references since Yang discloses detecting facial landmarks and Muhi teaches methods to detect facial landmarks (Muhi, page 3, section A, Face Detection).

The combination of Yang and Muhi does not explicitly disclose using the plurality of candidate feature anchor points to determine an updated feature anchor point corresponding to the feature anchor point to be replaced.

Brandt is in the same field of art of facial detection. Further, Brandt teaches using the plurality of candidate feature anchor points (Brandt, ¶ [0036-0039]: "the global optimization may find the sequence of candidate locations having a maximum sum of unary and binary scores in O(NM2) time where N is the number of landmark feature points and M is the number of candidate locations for each point." Brandt teaches a plurality of candidate facial landmarks for a detected facial landmark) to determine an updated feature anchor point corresponding to the feature anchor point to be replaced (Brandt, ¶ [0040-0047]: "a shape model (e.g., a component-based shape model) may be applied to update the respective feature point locations for each object component of the detected object". Brandt teaches using a shape model to update feature points based on the candidates).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang and Muhi by incorporating the facial landmark localization method taught by Brandt, to make a face detection system that can adjust detected landmarks; one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need to obtain facial landmarks in a more accurate manner (Brandt, ¶ [0073]: "The disclosed techniques may localize feature points in a reliable and accurate manner under a broad range of appearance variation").

The combination of Yang, Muhi and Brandt teaches that the swinging direction comprises a first swinging direction at a first time point (Yang, ¶ [0006]: "determining, by one or more processors, a first rotation between a first head pose axis associated with a first image of a plurality of images of the user", ¶ [0084]: "the head pose angle calculation in image sequences") and comprises a second swinging direction at a second time point (Yang, ¶ [0006]: "determine a second rotation between a second head pose axis associated with a second image of the plurality of images of the user", ¶ [0084]. Yang teaches that images are captured in sequences, so the first and second images are captured at different time points), and an output device (Yang, ¶ [0145]: "The output interface 1530 may interface to or include a display device, such as a touchscreen").

The combination of Yang, Muhi and Brandt does not explicitly disclose using a moving average algorithm to display the first swinging direction and the second swinging direction.

[Image: media_image5.png — FIG. 10 of Lee]

Lee is in the same field of art of analyzing facial images. Further, Lee teaches using a moving average algorithm to display the first swinging direction and the second swinging direction (Lee, pages 258-259, section D, Pitch Estimation; FIG. 10, see above. Lee teaches using a moving average to present the pitch angle).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang, Muhi and Brandt by incorporating the method of applying a moving average to the head orientation angle taught by Lee, to make a system that can present the head orientation angle using a moving average; one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need to present the data in a more reliable manner (Lee: "we can find that the true mfrontal and sfrontal values can reliably be found using the moving average method."). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

CLAIM 16

Regarding Claim 16, the combination of Yang, Muhi, Brandt and Lee teaches the method of Claim 11.
In addition, the combination of Yang, Muhi, Brandt and Lee teaches that the three-dimensional model comprises a plurality of point locations (Yang, ¶ [0061]: "translational components (e.g., three-dimensional (3D) coordinates of key facial landmarks in relation to the new coordinate system axis) can be determined based on an initial image of the user,"), and the plurality of point locations comprise a reference point location and a plurality of remaining point locations (Yang, ¶ [0077]: "In some aspects, the origin of a head pose axis is set to the center of a user's head (e.g., point G in FIG. 1)". Yang teaches that one point among the plurality of facial points can be set as an origin point.), wherein coordinates of each of the plurality of remaining point locations are computed according to reference coordinates of the reference point location (Yang, ¶ [0082]: "obtain the 3D coordinates of these facial landmarks (in the coordinate system of the face model), and then adjusting the results based on the origin of the new head pose axis O2." Yang teaches adjusting the other facial landmarks based on one origin point.)

CLAIM 17

Regarding Claim 17, the combination of Yang, Muhi, Brandt and Lee teaches the method of Claim 16.

In addition, the combination teaches that the plurality of point locations comprise a central axis point location (Muhi, FIG. 3, the feature point between the two eyes) and a non-central axis point location (Muhi, FIG. 3, the feature points on the two eyes), wherein the non-central axis point location comprises a first point location and a second point location (Muhi, FIG. 3, the feature points on the two eyes), and the first point location and the second point location are relative to a central axis of the three-dimensional model (Yang, FIG. 8, diagram 804: the two feature points of the two eyes are symmetrical across the central axis Y2.), wherein the method further comprises: setting an updated X-axis coordinate of the central axis point location to 0 (Yang, ¶ [0085]: "the midpoint between the centers of corneal curvatures can be set as the origin of the head pose axis axis_middle for this image"; the point between the two eyes can be set as the origin, which has coordinates of zero); and using original coordinates of the first point location and original coordinates of the second point location to calculate updated coordinates of the first point location and updated coordinates of the second point location (Brandt, ¶ [0027]: "the appearance search consists of sampling a range of offsets perpendicularly from the current location and identifying the offset position that minimizes the Mahalanobis distance…. The new location for each feature point is the location that minimizes this distance among the set of possible locations tested". Brandt teaches that the updated feature point is the tested location nearest to the original feature point.)

CLAIM 18

Regarding Claim 18, the combination of Yang, Muhi, Brandt and Lee teaches the method of Claim 11.

In addition, the combination teaches that the updated feature anchor point corresponds to a glabella of the human face (Muhi, FIG. 3, facial feature points are detected at the glabella of the face), and translating the updated feature anchor point to a central position in the obscured human face image (Yang, ¶ [0085]: "the midpoint between the centers of corneal curvatures can be set as the origin of the head pose axis axis_middle for this image. At operation 426, the facial landmark parameter set para_middle (e.g., as determined during the offline process 410) can be retrieved.")

CLAIM 20

Regarding Claim 20, the combination of Yang, Muhi, Brandt and Lee teaches the method of Claim 11.
In addition, the combination of Yang, Muhi, Brandt and Lee teaches the obscured human face image comprises a color image (Muhi, FIG. 2&3) or an infrared image. (Yang, ¶ [0070]: “the camera 250 can be a near infrared (NIR) camera”, FIG. 2B) Claim(s) 2 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Muhi in view of Brandt in view of Lee, and further in view of Cheng et al. (US-20220139107-A1, hereinafter Cheng) CLAIM 2 In regards to Claim 2, the combination of Yang, Muhi, Brandt and Lee teaches the device of Claim 1. The combination of Yang, Muhi, Brandt and Lee does not explicitly disclose each of the plurality of candidate feature anchor points corresponds to a weight, and each of the plurality of candidate feature anchor points corresponds to candidate coordinates, wherein the processor is further configured to: use the weight and the candidate coordinates to calculate updated coordinates of the updated feature anchor point. Cheng is in the same field of art of facial detection. Further, Cheng teaches each of the plurality of candidate feature anchor points corresponds to a weight (Cheng, ¶ [0036-0042]: “…weight the respective detected landmark position and the respective optical flow position of the landmark in the second image …”. Cheng teaches the position of a landmark is selected between positions of two candidate landmarks; Cheng also disclose weighting two candidates landmarks.), and each of the plurality of candidate feature anchor points corresponds to candidate coordinates (Cheng, ¶ [0020]: “…“identifying a landmark position” refer to determining or identifying a position and/or location of that facial landmark, for example in a two-dimensional coordinate system”, Cheng teaches position of a landmark is coordinates.), wherein the processor is further configured to: use the weight and the candidate coordinates to calculate updated coordinates of the updated feature anchor point. 
(Cheng, ¶ [0036-0042]: “…determine the position for the landmark in the second image based on the respective detected landmark position and the respective optical flow position of the landmark…”, Cheng teaches the position of a landmark is selected between the positions of two weighted candidate landmarks.) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang, Muhi and Brandt by incorporating the weight-based landmark localization method taught by Cheng, to make a landmark localization system that gives weight to candidates; thus, one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need for accurate facial landmark determination (Cheng, ¶ [0014]: “accurate mask locations may depend on accurate determination of facial landmarks”). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

CLAIM 12

In regards to Claim 12, the combination of Yang, Muhi, Brandt and Lee teaches the method of Claim 11. The combination of Yang, Muhi, Brandt and Lee does not explicitly disclose each of the plurality of candidate feature anchor points corresponds to a weight, and each of the plurality of candidate feature anchor points corresponds to candidate coordinates, wherein the processor is further configured to: use the weight and the candidate coordinates to calculate updated coordinates of the updated feature anchor point. Cheng is in the same field of art of facial detection. Further, Cheng teaches each of the plurality of candidate feature anchor points corresponds to a weight (Cheng, ¶ [0036-0042]: “…weight the respective detected landmark position and the respective optical flow position of the landmark in the second image …”.
Cheng teaches the position of a landmark is selected between the positions of two candidate landmarks; Cheng also discloses weighting the two candidate landmarks.), and each of the plurality of candidate feature anchor points corresponds to candidate coordinates (Cheng, ¶ [0020]: “…“identifying a landmark position” refer to determining or identifying a position and/or location of that facial landmark, for example in a two-dimensional coordinate system”, Cheng teaches that the position of a landmark is expressed as coordinates.), wherein the processor is further configured to: use the weight and the candidate coordinates to calculate updated coordinates of the updated feature anchor point. (Cheng, ¶ [0036-0042]: “…determine the position for the landmark in the second image based on the respective detected landmark position and the respective optical flow position of the landmark…”, Cheng teaches the position of a landmark is selected between the positions of two weighted candidate landmarks.) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang, Muhi and Brandt by incorporating the weight-based landmark localization method taught by Cheng, to make a landmark localization system that gives weight to candidates; thus, one of ordinary skill in the art would be motivated to combine the references since, among its several aspects, the present invention recognizes a need for accurate facial landmark determination (Cheng, ¶ [0014]: “accurate mask locations may depend on accurate determination of facial landmarks”). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
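The weighted-candidate computation Cheng is cited for can be illustrated with a small sketch (a normalized weighted average of candidate coordinates; the function name and the (weight, (x, y)) representation are assumptions for illustration, not Cheng's actual code):

```python
def weighted_anchor(candidates):
    """Blend a plurality of candidate feature anchor points into
    updated coordinates using each candidate's weight (a normalized
    weighted average). `candidates` is a list of (weight, (x, y))."""
    total = sum(w for w, _ in candidates)
    x = sum(w * p[0] for w, p in candidates) / total
    y = sum(w * p[1] for w, p in candidates) / total
    return (x, y)
```

With exactly two candidates, this reduces to interpolating between the detected landmark position and the optical-flow position, matching the two-candidate weighting Cheng describes in ¶ [0036-0042].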
Allowable Subject Matter

Claims 3-5 and 13-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NHUT HUY (JEREMY) PHAM, whose telephone number is (703) 756-5797. The examiner can normally be reached Mon - Fri, 8:30am - 6pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, O'Neal Mistry, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

NHUT HUY (JEREMY) PHAM
Examiner
Art Unit 2674

/Ross Varndell/
Primary Examiner, Art Unit 2674

Prosecution Timeline

Feb 20, 2023
Application Filed
Apr 30, 2025
Non-Final Rejection — §103
Jul 31, 2025
Response Filed
Sep 16, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598397
DIRT DETECTION METHOD AND DEVICE FOR CAMERA COVER
2y 5m to grant Granted Apr 07, 2026
Patent 12598074
FACIAL RECOGNITION METHOD AND APPARATUS, DEVICE, AND MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12597254
TRACKING OPERATING ROOM PHASE FROM CAPTURED VIDEO OF THE OPERATING ROOM
2y 5m to grant Granted Apr 07, 2026
Patent 12592087
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12579622
METHOD AND APPARATUS FOR PROCESSING IMAGE SIGNAL, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+26.8%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 53 resolved cases by this examiner. Grant probability derived from career allow rate.
