DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This is in response to the applicant’s response filed on 02/05/2026. In that response, claims 1-2, 5-6, 8, 11-12, 15-16, and 20 were amended. Accordingly, claims 1-20 are pending and being examined. Claims 1, 11, and 20 are in independent form.
Claim Interpretation Under 35 USC § 112(f)
3. The claim interpretation under 35 USC § 112(f) set forth in the previous Office action has been withdrawn in view of the applicant’s amendment.
Claim Rejections - 35 USC § 103
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1-8, 10-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Choi (US 2023/0076017, hereinafter “Choi”).
Regarding claim 1, Choi discloses a monitoring system (the object detection system in a neural network training method; see figs.1-8 and abstract), comprising:
a camera, capturing an original image (see “user terminal 10” in fig.1 and para.38: “a user 5 may capture the user's own face by using a user terminal 10.”); and
a processing device, communicatively connected to the camera and configured to perform (see “service operation server 100” in fig.1 and para.39: “a service operation server 100 may receive a captured face image 20 from the user 5.”): obtaining an image of a monitoring target from the original image (see the face image 20 in fig.1/fig.2);
performing a de-identification processing on the original image to obtain a de-identification image, and outputting the de-identification image (see “de-identification unit 110” and “third image 22” output by 110 in fig.2; see the input face image 20 in fig.4 and the output de-identification face image 22 in fig.5), wherein the de-identification image comprises a part of the original image and the de-identified monitoring target (see para.43: “the service operation server 100 [for generating the de-identification face image] may protect the personal information by de-identifying image including the personal information such as the facial information used as the training data and making it impossible to restore the de-identified image to an original image.” Also see para.81: wherein “the first image 20 may be an image captured in a general way and it is thus possible to identify the object included therein [i.e., the original/first image 20] with the naked eyes. However, the third image 22 [i.e., the de-identification face image] may include information redefined after being encoded, and include data values in a form in which the object is impossible to be identified with the naked eyes.” As shown by the de-identification face image 22 in fig.5, it is apparent that the de-identification process of Choi may redefine only the facial/personal region in the original face image and make the face in the de-identification face image impossible to be identified with the naked eyes.);
performing a first de-identification operation on the image of the monitoring target to generate a de-identification feature (see the face image 20 → the de-identification face image 22 → the object information 30 in fig.2/fig.4; see para.59: “The de-identification unit 110 may de-identify the face image [20] captured by and transmitted from the user terminal 10.” It should be noted that the object information 30 includes de-identified facial features including “eyes, nose, mouth and the like” which are extracted by the training unit 120. See para.53, lines 8-14).
Choi does not explicitly disclose “determining whether the de-identification feature matches a pre-stored feature in a feature database to generate a verification result” as recited by the claim. However, Choi implicitly discloses these features:
Paragraph [0089], lines 1-4, Choi states “[t]he object information 30 extracted as an output signal for the training may be compared with a target value of the actual image to calculate an error, and the calculated error may be propagated in the neural network in a reverse direction according to a backpropagation algorithm.”
Paragraph [0044], Choi states “The service operation server 100 according to this embodiment may de-identify the image received from the user terminal 10 and store the de-identified image as the training data in the database 200 to be used for the training of the neural network.”
Paragraph [0053], lines 8-14, Choi states “the neural network [120 of fig.2] may extract eyes, nose, mouth and the like, included in the facial information, from the image, and may be trained to extract, as the object information 30, information determined to be necessary to identify whose facial image these extracted images belong to, such as their shapes, relative positional relationship and the like.”
Paragraph [0041], lines 1-5, Choi states: “Iterative training may be required in order to the neural network used for the user authentication to determine the identity of the user through the image.”
According to the teachings above, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that “the target value of the actual image,” which is compared with the object information 30 to determine the identity of the face image 20 or 22, comprises de-identified facial features including the eyes, the nose, the mouth and the like, pre-extracted from face images and pre-stored in the database 200. It would further have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine all the teachings of Choi into the monitoring system of claim 1. Suggestion or motivation for doing so would have been to “perform the user authentication by using a neural network and perform a faster and more accurate user verification procedure than a conventional image processing method by using the neural network” as taught by Choi, cf. para.40. Therefore, the claim is unpatentable over Choi.
Regarding claims 2 and 12, Choi discloses wherein the processing device is further configured to perform: performing a second de-identification operation on the facial image to generate a de-identification label, and establishing a mapping relationship between the de-identification label and the de-identification image to establish or update an image database (see para.41: “Iterative training may be required in order to the neural network used for the user authentication to determine the identity of the user through the image, and the service operation server 100 may thus store various training data in a separate database 200.” See para.44: “The service operation server 100 according to this embodiment may de-identify the image received from the user terminal 10 and store the de-identified image as the training data in the database 200 to be used for the training of the neural network.” In other words, the identity/label of the user and the de-identified face image of the user are associated with each other and stored in the database).
Regarding claims 3 and 13, Choi discloses wherein the second de-identification operation is the same as the first de-identification operation (ibid.).
Regarding claims 4 and 14, Choi discloses wherein the second de-identification operation is different from the first de-identification operation, wherein the processing device performs the first de-identification operation based on a differential privacy algorithm (see the de-identification unit 110 of fig.2) and performs the second de-identification operation based on a homomorphic encryption algorithm (see the training unit 120 of fig.2).
Regarding claims 5 and 15, Choi discloses wherein the de-identification processing on the image comprises: covering the monitoring target in the image using a deep learning model to generate the de-identification image (see the de-identification encoder-decoder NN 110 of fig.2).
Regarding claims 6 and 16, Choi discloses wherein the processing device is further configured to perform: capturing the facial image from the image using the deep learning model (see the de-identification encoder-decoder NN 110 of fig.2).
Regarding claims 7 and 17, Choi discloses wherein the deep learning model comprises a deep neural network (see the de-identification encoder-decoder NN 110 of fig.2).
Regarding claims 8 and 18, Choi discloses wherein the processing device is further configured to perform: performing a second de-identification operation on the facial image to generate a de-identification label; and querying an image database according to the de-identification label to obtain a historical de-identification image corresponding to the de-identification label (see para.53, lines 8-14: “the neural network [120 of fig.2] may extract [de-identified facial features including] eyes, nose, mouth and the like, included in the facial information.”).
Regarding claim 10, Choi discloses the monitoring system according to claim 8, wherein the processing device is further configured to perform: determining whether the verification result is successful; and in response to the verification result being successful, querying the image database according to the de-identification label to obtain the historical de-identification image corresponding to the de-identification label (see para.89, lines 1-4: “[t]he object information 30 extracted as an output signal for the training may be compared with a target value of the actual image to calculate an error, and the calculated error may be propagated in the neural network in a reverse direction according to a backpropagation algorithm.” It should be noted that, when the error equals zero, the features are matched and the verification is successful. Therefore, the claimed invention is an obvious variation of Choi.).
Regarding claim 11, claim 11 recites limitations substantially similar to those of claim 1; thus it is interpreted and rejected for the reasons set forth in the rejection of claim 1.
Regarding claim 20, claim 20 is essentially a combination of claims 1 and 8; thus it is interpreted and rejected for the reasons set forth in the rejections of claims 1 and 8.
7. Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Choi in view of Zadeh (US 8311973, hereinafter “Zadeh”).
Regarding claims 9 and 19, Choi discloses the claimed invention except for “performing a fuzzy search on the image database”. However, in the same field of endeavor, Zadeh teaches “search[ing] database” and “looking for degree of similarity, e.g. as a fuzzy parameter” “for face or iris recognition”. See col.90, lines 53-67. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Zadeh into the teachings of Choi and perform “a fuzzy search on the image database” as taught by Zadeh according to the de-identification label to obtain the historical de-identification image taught by Choi. Suggestion or motivation for doing so would have been to perform “face or iris recognition” as taught by Zadeh, cf. col.90, lines 53-67. Therefore, the claim is unpatentable over Choi in view of Zadeh.
Response to Arguments
8. Applicant’s arguments with respect to claim 1, filed on 02/05/2026, have been fully considered but they are not persuasive.
On page 9 of applicant’s response, applicant argues:
Choi does not teach or suggest that the third image 22 includes both a part of the original face image and the de-identified face image. That is, Choi fails to disclose "performing a de-identification processing on the original image to obtain a de-identification image, and outputting the de-identification image, wherein the de-identification image comprises a part of the original image and the de-identified monitoring target" recited in amended claim 1 of the present application.
The examiner respectfully disagrees with the applicant’s arguments for at least the following reasons. As shown by the de-identification face image 22 in fig.5 of Choi, only the subject region is redefined in order to protect the identity of the subject, while the background is not necessarily modified. In fact, in Paragraph [0043], Choi states “the service operation server 100 [for generating the de-identification face image 22] may protect the personal information by de-identifying image including the personal information such as the facial information used as the training data and making it impossible to restore the de-identified image to an original image.” In Paragraph [0081], Choi states that “the first image 20 may be an image captured in a general way and it is thus possible to identify the object included therein [i.e., the original/first face image 20] with the naked eyes. However, the third image 22 [i.e., the de-identification face image 22] may include information redefined after being encoded, and include data values in a form in which the object is impossible to be identified with the naked eyes.” Therefore, it is apparent that the de-identification process of Choi may redefine only the face region in the original face image and make the face in the de-identification face image impossible to be identified with the naked eyes. The argument is not persuasive.
Conclusion
9. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUIPING LI whose telephone number is (571)270-3376. The examiner can normally be reached at 8:30am-5:30pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HENOK SHIFERAW can be reached on (571)272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RUIPING LI/Primary Examiner, Ph.D., Art Unit 2676