Prosecution Insights
Last updated: April 19, 2026
Application No. 17/947,955

SYNTHETIC MASKED BIOMETRIC SIGNATURES

Status: Non-Final OA (§103)
Filed: Sep 19, 2022
Examiner: PEARSON, AMANDA HYEONWOO
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: RealNetworks LLC
OA Round: 3 (Non-Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (18 granted / 25 resolved), +10.0% vs Tech Center average (above average)
Interview Lift: +41.2% (resolved cases with interview vs without)
Typical Timeline: 3y 0m average prosecution; 25 applications currently pending
Career History: 50 total applications across all art units
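The headline figures above reduce to simple arithmetic. A minimal sketch — note that the 0.62 Tech Center baseline is inferred from the "+10.0% vs TC avg" delta, not a figure stated in the panel:

```python
# Illustrative arithmetic behind the examiner panel.
# Counts (18 granted, 25 resolved) come from the panel above; the
# 0.62 Tech Center baseline is an inferred assumption, not a stated figure.
granted, resolved = 18, 25
allow_rate = granted / resolved

print(f"Career allow rate: {allow_rate:.0%}")  # Career allow rate: 72%

tc_average = 0.62  # assumed baseline implied by the +10.0% delta
print(f"Delta vs TC average: {allow_rate - tc_average:+.1%}")  # +10.0%
```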

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 58.4% (+18.4% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 25 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on August 29, 2025 has been entered.

Claim Status

Applicant’s amendment filed on August 29, 2025 is acknowledged. Claims 1-20 are currently pending. Claims 1, 11, and 17 have been amended.

Response to Arguments and Amendments

Applicant’s arguments, see pages 7-11 of Applicant’s remarks, filed August 29, 2025, with respect to the rejections of claims 1-20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Han in view of Jung and Weng.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 6, 9, 11-12, 14-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al., US 20190332851 (hereinafter “Han”), in view of Jung et al., US 7227976 B1 (hereinafter “Jung”), in further view of Weng et al., US 20190050632 A1 (hereinafter “Weng”). 
Regarding claim 1, Han teaches a method, comprising: receiving a first face image of a target person ([0042] “In one general aspect, processor implemented face verifying method includes obtaining a face image for verifying the face image” wherein a first face image of a target person is a face image); combining a mask image with the first face image to obtain a second face image (Han; 140 – a face image); generating a first biometric signature (wherein a biometric signature, according to the specification, “may be a multidimensional array or vector representative of features of the face depicted in the face image 206.”) ([0083] “the computing apparatus 120 extracts a face feature from the face image using a feature extractor, and determines whether the face verification is successful based on a result of the comparing of the extracted face feature to the example registration feature registered in a face registering process, e.g., of the same or different computing apparatus 120. The feature extractor refers to a hardware implemented model that outputs feature information, for example, a feature vector” wherein a first biometric signature is a face feature from the face image otherwise defined as a feature vector) of the second face image ([0088] “The input image is input to the face verifying apparatus, and represents an input face being a target of face verification.” wherein the target user is the target of face verification); storing the first biometric signature ([0099] “a face feature corresponding to the input face based on the face image and the synthesized face image” wherein the first biometric signature is a face feature) in association with identification information ([0113] “the determined registration feature may be stored in operation 445, such as in a memory of the verifying apparatus.”) of the target person ([0088] “The input image is input to the face verifying apparatus, and represents an input face being a target of face verification.” wherein the target user is the target of face verification); and using the first biometric signature 
to identify the target person in a target image ([0100] “In operation 250, the face verifying apparatus may determine whether verification is successful based on the determined face feature.” wherein the first biometric signature is the face feature) ([0088] “The input image is input to the face verifying apparatus, and represents an input face being a target of face verification.” wherein the target user is the target of face verification and a target image is the input image).

Han does not specifically disclose a mask image wherein the mask image is an image of a face mask covering a particular region of a person’s face. However, Weng teaches a mask image wherein the mask image is an image of a face mask covering a particular region of a person’s face ([0068] “Again for example, it is possible to input the accessory-not-worn face image of a certain user into the generative network corresponding to a mask, to generate a mask-worn face image of the user on the basis of accessory-not-worn face image.” wherein a mask image is a mask-worn face image of the user). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include a mask image of Weng in the facial biometric verification method of Han to improve the accuracy of mask recognition in face images.

Han in view of Weng does not specifically disclose a second face image. However, Jung teaches a second face image ([Col.3, lines 44-47] “The image of the user may be superimposed with a pair of sunglasses image 103, hat image 104, or any other predefined virtual objects, which can be attached to and enhance the human face image 800.”) ([Col.5, lines 16-18] “In the Superimposition module 202 step, the information about the facial features is used in order to superimpose 208 objects on to the user's face image 800.” wherein a second face image is the superimposed image). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the facial biometric verification method of Han in view of Weng with the superimposition method of Jung to output an image of a face with a mask.

Regarding claim 2, Han in view of Jung and Weng teaches the method of claim 1, wherein the mask (Han - [0094] “Thus, the region corresponding to the masking region in the synthesized face image represents the image information of the example reference image” wherein a mask image is the masking region of a reference image) is an image of a respiration mask or an image of a respirator (Han - [0084] “In another case, the occluding obstacles may be sunglasses, a mask (e.g., health mask)” wherein a respiration mask is a health mask). The motivation for combining Han, Jung, and Weng is the same motivation as used for claim 1.

Regarding claim 3, Han in view of Jung and Weng teaches the method of claim 1, comprising: determining a defined region of the first face image (Han - [0088] “The input image is input to the face verifying apparatus, and represents an input face being a target of face verification.” wherein a first face image is the input image); and superimposing the mask image over the first face image within the defined region (Jung - [Col.3, lines 44-47] “The image of the user may be superimposed with a pair of sunglasses image 103, hat image 104, or any other predefined virtual objects, which can be attached to and enhance the human face image 800.”) (Weng - [0068] “Again for example, it is possible to input the accessory-not-worn face image of a certain user into the generative network corresponding to a mask, to generate a mask-worn face image of the user on the basis of accessory-not-worn face image.” wherein a mask image is a mask-worn face image of the user) (Han - [0101] “a corresponding synthesized face image may also be generated in the registering process by replacing image information of a predefined masking region in a registration face image” wherein a defined region is a predefined masking region). The motivation for combining Han, Jung, and Weng is the same motivation as used for claim 1.

Regarding claim 4, Han in view of Jung and Weng teaches the method of claim 3, wherein the defined region (Han - [0101] “a corresponding synthesized face image may also be generated in the registering process by replacing image information of a predefined masking region in a registration face image” wherein a defined region is a predefined masking region) is between a bridge of a nose of the first face image (Han; 140 – a face image) and a chin of the first face image (Han - [0095] “when an influence of a mask (e.g., health mask) is to be reduced in the face verification, a position (position at which mask is estimated to be present) for a masking region may be determined based on a position of a nose or a mouth detected from the face region, and a masking region may be determined with a form similar to a determined form of the mask.” wherein the defined region, which is based on a position of a nose or a mouth, is between a bridge of a nose and a chin). The motivation for combining Han, Jung, and Weng is the same motivation as used for claim 1. 
Regarding claim 6, Han in view of Jung and Weng teaches the method of claim 1, comprising: receiving a third face image (Han - [0114] “a verification image 460 in which a verification target is represented may be obtained by photographing a face of a user using a camera” wherein a third face image is a verification image 460); generating a second biometric signature (Han - [0100] “For example, the face verifying apparatus may determine whether the verification is successful based on a result of comparing the face feature determined in operation 240 to a stored registration feature (feature of registered user face)” wherein a second biometric signature is a stored registration feature) of the third face image; calculating a difference between the first biometric signature (Han - [0099] “a face feature corresponding to the input face based on the face image and the synthesized face image” wherein the first biometric signature is a face feature) and the second biometric signature (Han - [0100] “The face verifying apparatus may determine a probabilistic similarity, for example, between the determined face feature and the registration feature” wherein a calculated difference is a probabilistic similarity); comparing the difference to a first set of criteria for authentication (Han - [0100] “For example, the face verifying apparatus may determine whether the verification is successful based on a result of comparing the face feature determined in operation 240 to a stored registration feature (feature of registered user face).” wherein a first set of criteria for authentication is a stored registration feature); and as a result of the difference satisfying a defined threshold, determining that the third face image (Han - [0114] “a verification image 460 in which a verification target is represented may be obtained by photographing a face of a user using a camera” wherein a third face image is a verification image 460) corresponds to the target user (Han - [0100] “In an example, the face verifying apparatus determines that the verification is successful in response to the similarity meeting or being greater than a verification threshold value, and determines that the verification has failed in response to the similarity not meeting or being less than or equal to the verification threshold value.” wherein a defined threshold is a verification threshold value) (Han - [0088] “The input image is input to the face verifying apparatus, and represents an input face being a target of face verification.” wherein the target user is the target of face verification). The motivation for combining Han, Jung, and Weng is the same motivation as used for claim 1.

Regarding claim 9, Han in view of Jung and Weng teaches the method of claim 1, wherein superimposing the mask image includes providing the first face image to a neural network trained to generate a new image by superimposing an object image on a target image (Jung - [Col.3, lines 44-47] “The image of the user may be superimposed with a pair of sunglasses image 103, hat image 104, or any other predefined virtual objects, which can be attached to and enhance the human face image 800.”) (Weng - [0068] “Again for example, it is possible to input the accessory-not-worn face image of a certain user into the generative network corresponding to a mask, to generate a mask-worn face image of the user on the basis of accessory-not-worn face image.” wherein a mask image is a mask-worn face image of the user) (Han - [0088] “The input image is input to the face verifying apparatus, and represents an input face being a target of face verification.” wherein a first face image is the input image). The motivation for combining Han, Jung, and Weng is the same motivation as used for claim 1. 
Regarding claim 11, the claim recites similar limitations to claim 1 but in the form of a system, comprising: one or more processors (Han; 710 – processor); and memory (Han; 720 – memory) coupled to the one or more processors and storing instructions that, as a result of execution by the one or more processors, cause the system to execute the method (Han - [0135] “The memory 720 is a non-transitory computer readable media or device connected to the processor 710, and may store instructions, which when executed by the processor 710, cause the processor 710 to implement one or more or all operations described herein.”) of claim 1. Further, claim 11 contains the additional element not recited in claim 1: obtain a mask image from data storage. Han in view of Weng teaches obtaining a mask image from data storage (Weng - [0068] “Again for example, it is possible to input the accessory-not-worn face image of a certain user into the generative network corresponding to a mask, to generate a mask-worn face image of the user on the basis of accessory-not-worn face image.” wherein a mask image is a mask-worn face image of the user) (Han - [0143] “The storage device 840 stores a database including information, for example, registration features, registered in a face registering process.”). Therefore, claim 11 recites similar limitations to claim 1 and is rejected for similar rationale and reasoning (see the analysis for claim 1 above).

Regarding claim 12, the claim recites similar limitations to claim 3 but in the form of a system. Therefore, claim 12 recites similar limitations to claim 3 and is rejected for similar rationale and reasoning (see the analysis for claim 3 above).

Regarding claim 14, Han in view of Jung and Weng teaches the system of claim 11, comprising: a neural network trained (Jung - [Col.4, lines 26-28] “Any robust face detector can do the face detection 204. For this particular embodiment a neural network based face detector was used.”) to combine the mask image with the first face image to obtain the second face image (Weng - [0068] “Again for example, it is possible to input the accessory-not-worn face image of a certain user into the generative network corresponding to a mask, to generate a mask-worn face image of the user on the basis of accessory-not-worn face image.” wherein a mask image is a mask-worn face image of the user) (Han - [0088] “The input image is input to the face verifying apparatus, and represents an input face being a target of face verification.” wherein a first face image is the input image) (Jung - [Col.3, lines 44-47] “The image of the user may be superimposed with a pair of sunglasses image 103, hat image 104, or any other predefined virtual objects, which can be attached to and enhance the human face image 800.”) (Jung - [Col.5, lines 16-18] “In the Superimposition module 202 step, the information about the facial features is used in order to superimpose 208 objects on to the user's face image 800.” wherein a second face image is the superimposed image). The motivation for combining Han, Jung, and Weng is the same motivation as used for claim 1.

Regarding claim 15, the claim recites similar limitations to claim 6 but in the form of a system. Therefore, claim 15 recites similar limitations to claim 6 and is rejected for similar rationale and reasoning (see the analysis for claim 6 above). 
Regarding claim 16, Han in view of Jung and Weng teaches the system of claim 11, wherein the mask image (Weng - [0068] “Again for example, it is possible to input the accessory-not-worn face image of a certain user into the generative network corresponding to a mask, to generate a mask-worn face image of the user on the basis of accessory-not-worn face image.” wherein a mask image is a mask-worn face image of the user) is an image of personal protective equipment (Han - [0084] “In another case, the occluding obstacles may be sunglasses, a mask (e.g., health mask)” wherein personal protective equipment is a mask). The motivation for combining Han, Jung, and Weng is the same motivation as used for claim 1.

Regarding claim 17, the claim recites similar limitations to claim 11 but in the form of a non-transitory computer-readable medium (Han - [0135] “The memory 720 is a non-transitory computer readable media or device connected to the processor 710, and may store instructions”). Further, claim 17 contains the additional element not recited in claim 11: superimpose the mask image on a defined region of the first face image to obtain a second face image (Jung - [Col.3, lines 44-47] “The image of the user may be superimposed with a pair of sunglasses image 103, hat image 104, or any other predefined virtual objects, which can be attached to and enhance the human face image 800.”) (Han - [0101] “a corresponding synthesized face image may also be generated in the registering process by replacing image information of a predefined masking region in a registration face image” wherein a defined region is a predefined masking region) (Weng - [0068] “Again for example, it is possible to input the accessory-not-worn face image of a certain user into the generative network corresponding to a mask, to generate a mask-worn face image of the user on the basis of accessory-not-worn face image.” wherein a mask image is a mask-worn face image of the user) (Han - [0108] “In operation 330, the face verifying apparatus generates the synthesized face image of the corresponding face image and a reference image based on the determined type of the occlusion region. The face verifying apparatus generates the synthesized face image by replacing image information of the masking region corresponding to the type of the occlusion region in the face image with image information of the reference image.” wherein a second face image is the synthesized face image). Therefore, claim 17 recites similar limitations to claim 11 and is rejected for similar rationale and reasoning (claim 11 recites similar limitations to claim 1, therefore see the analysis for claim 1 above).

Regarding claim 18, Han in view of Jung and Weng teaches the non-transitory computer-readable medium (Han - [0141] “The memory 820 is a non-transitory computer readable media or device that store information to be used for the face verification.”) of claim 17, wherein execution of the instructions causes the one or more processors to: receive a third face image of the person captured by a camera (Han - [0114] “a verification image 460 in which a verification target is represented may be obtained by photographing a face of a user using a camera” wherein a third face image is a verification image) (Han - [0142] “The camera 830 captures a still image, a video, or both. The processor 810 may control the camera 830 to capture an image, e.g., including a face region”) (Han - [0088] “The input image is input to the face verifying apparatus, and represents an input face being a target of face verification.” wherein the person is a target), the third face image shows the person without a face mask (Han - [0114] “a verification image 460 in which a verification target is represented may be obtained by photographing a face of a user using a camera” wherein a third face image is a verification image); generate a second biometric signature of the third face image (Han - [0100] “For example, the face verifying apparatus may determine whether the verification is successful based on a result of comparing the face feature determined in operation 240 to a stored registration feature (feature of registered user face)” wherein a second biometric signature is a stored registration feature); calculate a difference between the first biometric signature (Han - [0099] “a face feature corresponding to the input face based on the face image and the synthesized face image” wherein the first biometric signature is a face feature) and the second biometric signature (Han - [0100] “The face verifying apparatus may determine a probabilistic similarity, for example, between the determined face feature and the registration feature” wherein a calculated difference is a probabilistic similarity); compare the difference to a first set of criteria for authentication (Han - [0100] “For example, the face verifying apparatus may determine whether the verification is successful based on a result of comparing the face feature determined in operation 240 to a stored registration feature (feature of registered user face).” wherein a first set of criteria for authentication is a stored registration feature); and as a result of the difference satisfying a defined threshold, determine that the third face image (Han - [0082] “the computing apparatus 120 obtains a face image 140 of the user 110 using an image obtaining apparatus, for example, a camera 130” wherein the third face image is a face image 140 of the user with no occluding obstacles) corresponds to the identity of the person (Han - [0100] “In an example, the face verifying apparatus determines that the verification is successful in response to the similarity meeting or being greater than a verification threshold value, and determines that the verification has failed in response to the similarity not meeting or being less than or equal to the verification threshold value.” wherein a defined threshold is a verification threshold value) (Han - [0088] “The input image is input to the face verifying apparatus, and represents an input face being a target of face verification.” wherein the person is a target). The motivation for combining Han, Jung, and Weng is the same motivation as used for claim 1.

Regarding claim 20, the claim recites similar limitations to claim 3 but in the form of a non-transitory computer-readable medium. Therefore, claim 20 recites similar limitations to claim 3 and is rejected for similar rationale and reasoning (see the analysis for claim 3 above).

Claims 5, 7-8, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al., US 20190332851 (hereinafter “Han”), in view of Jung et al., US 7227976 B1 (hereinafter “Jung”), in further view of Weng et al., US 20190050632 A1 (hereinafter “Weng”), in further view of Perry et al., US 20200097767 A1 (hereinafter “Perry”). 
Regarding claim 5, Han in view of Jung and Weng teaches the method of claim 3, comprising: determining a defined region (Han - [0101] “a corresponding synthesized face image may also be generated in the registering process by replacing image information of a predefined masking region in a registration face image” wherein a defined region is a predefined masking region) in the first face image (Han - [0088] “The input image is input to the face verifying apparatus, and represents an input face being a target of face verification.” wherein a first face image is the input image).

Han in view of Jung and Weng does not specifically disclose determining a first edge of the defined region along a line between a nose bridge in the first face image and a nose tip in the first image; determining a second edge of the defined region relative to a chin of the first face image; and determining a third edge and a fourth edge of the defined region relative to edges of cheeks in the first face image. However, Perry teaches determining a first edge of the defined region along a line between a nose bridge in the first face image and a nose tip in the first image; determining a second edge of the defined region relative to a chin of the first face image; and determining a third edge and a fourth edge of the defined region relative to edges of cheeks in the first face image ([0075] “Lines/edges of the selected face may be varied/warped to fit those of the face-model up to certain selected geometry threshold. For example, the nose width, eyes' distance, eyes' width or height, face width, forehead height, hair-line, chin line or any other line determining face geometry may be varied/warped in accordance with the selected face-model to a selected threshold.”) ([0076] “The feature de-identification (and/or image morphing technique) may be applied to one or more selected features 3020 of the face such as eyes, ears, nose, chin, mouth, hairline, eyebrows, cheeks or any other selected facial feature.”), wherein first, second, third, and fourth lines/edges are determined corresponding to a nose, chin, and cheeks.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the facial biometric verification method of Han in view of Jung and Weng with the feature for determining lines between a nose, chin, and cheek of Perry, because determining lines between a nose, chin, and cheek improves the accuracy of facial biometric verification by detecting specific features on a face. 
Regarding claim 7, Han in view of Jung and Weng teaches the method of claim 6, comprising: assessing that the third face image (Han - [0088] “The input image is input to the face verifying apparatus, and represents an input face being a target of face verification.” wherein a first face image is the input image) (Han - [0114] “a verification image 460 in which a verification target is represented may be obtained by photographing a face of a user using a camera” wherein a third face image is a verification image 460) corresponds to the target user as a result of confirming that (Han - [0088] “The input image is input to the face verifying apparatus, and represents an input face being a target of face verification.” wherein the target user is the target of face verification) (Han - [0100] “In an example, the face verifying apparatus determines that the verification is successful in response to the similarity meeting or being greater than a verification threshold value, and determines that the verification has failed in response to the similarity not meeting or being less than or equal to the verification threshold value.” wherein the result to confirm the image corresponds to the identity is the verification of similarity). Han in view of Jung and Weng does not specifically disclose one or more characteristics of a second set of criteria. However, Perry teaches one or more characteristics of a second set of criteria ([0074] “The suitable face model may be generated using one or more geometrical and color tone parameters such as face size, selected distances (e.g. between eyes, ears, mouth to nose etc.), features' sizes and shape, and color variations and tone” wherein the second set of criteria involving one or more characteristics is geometrical and color tone parameters).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the facial biometric verification method of Han in view of Jung and Weng by using a set of facial characteristics as another set of criteria of Perry for improved biometric authentication.

Regarding claim 8, Han in view of Jung, Weng, and Perry teaches the method of claim 7, wherein the second set of criteria include criteria involving one or more characteristics of a size of the first face image (Han - [0088] “The input image is input to the face verifying apparatus, and represents an input face being a target of face verification.” wherein a first face image is the input image), a pose of a face in the first face image, a sharpness of the first face image, and a contrast quality of the first face image (Perry - [0074] “The suitable face model may be generated using one or more geometrical and color tone parameters such as face size, selected distances (e.g. between eyes, ears, mouth to nose etc.), features' sizes and shape, and color variations and tone” wherein the second set of criteria involving one or more characteristics is geometrical and color tone parameters such as face size). The motivation for combining Han, Jung, Weng, and Perry is the same motivation as used for claim 7 above.

Regarding claim 13, the claim recites similar limitations to claim 5 but in the form of a system. Therefore, claim 13 recites similar limitations to claim 5 and is rejected for similar rationale and reasoning (see the analysis for claim 5 above).

Regarding claim 19, the claim recites similar limitations to claim 7 but in the form of a non-transitory computer-readable medium (Han - [0135] “The memory 720 is a non-transitory computer readable media or device connected to the processor 710, and may store instructions”). 
Therefore, claim 19 recites similar limitations to claim 7 and is rejected for similar rationale and reasoning (see the analysis for claim 7 above).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Han et al., US 20190332851 (hereinafter “Han”), in view of Jung et al., US 7227976 B1 (hereinafter “Jung”), in further view of Weng et al., US 20190050632 A1 (hereinafter “Weng”), in further view of Atsmon et al., US 20190228571 A1 (hereinafter “Atsmon”).

Regarding claim 10, Han in view of Jung and Weng teaches the method of claim 9, wherein the neural network (Han - [0091] “The face verifying apparatus may be configured to perform one or more or all neural network verification trainings”) is a generative adversarial network or a variational autoencoder. Han in view of Jung and Weng does not specifically disclose a generative adversarial network or a variational autoencoder. However, Atsmon teaches a generative adversarial network or a variational autoencoder ([0080] “Using one or more techniques, for example, a Conditional Generative Adversarial Neural Network (cGAN),”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the facial biometric verification method of Han in view of Jung and Weng by using a generative adversarial network of Atsmon to ensure all facial biometric data is protected when being analyzed.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMANDA PEARSON whose telephone number is (703) 756-5786. The examiner can normally be reached Monday - Friday, 9:00 - 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571)270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMANDA H PEARSON/
Examiner, Art Unit 2666

/MING Y HON/
Primary Examiner, Art Unit 2666

Prosecution Timeline

Sep 19, 2022
Application Filed
Dec 10, 2024
Non-Final Rejection — §103
Mar 17, 2025
Response Filed
May 29, 2025
Final Rejection — §103
Aug 04, 2025
Response after Non-Final Action
Aug 29, 2025
Request for Continued Examination
Sep 02, 2025
Response after Non-Final Action
Sep 10, 2025
Non-Final Rejection — §103
Dec 18, 2025
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602898
Neural Semantic Fields for Generalizable Semantic Segmentation of 3D Scenes
2y 5m to grant Granted Apr 14, 2026
Patent 12591994
USING LIGHT AND SHADOW IN VISION-AIDED PRECISE POSITIONING
2y 5m to grant Granted Mar 31, 2026
Patent 12579632
ANOMALY DETECTION FOR PRODUCT QUALITY CONTROL
2y 5m to grant Granted Mar 17, 2026
Patent 12555189
IMAGE RECOVERY PROCESSOR UTILIZING FRAMEWORK FOR GENERATING SPARSITY REGULARIZERS FOR IMAGE RESTORATION
2y 5m to grant Granted Feb 17, 2026
Patent 12548183
Privacy Preserving Sensor Including a Machine-Learned Object Detection Model
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
99%
With Interview (+41.2%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
