Prosecution Insights
Last updated: April 19, 2026
Application No. 18/591,872

IMAGE REPAIR OF CAPTURED DOCUMENT IMAGES FOR DOCUMENT IMAGE SUBMISSIONS USING A GENERATIVE ARTIFICIAL INTELLIGENCE

Status: Non-Final Office Action (§103; non-statutory double patenting)
Filed: Feb 29, 2024
Examiner: WOLFSON, ETHAN NOAH
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: PayPal, Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 15 total applications across all art units; 15 currently pending

Statute-Specific Performance

§101: 14.3% (-25.7% vs TC avg)
§103: 51.4% (+11.4% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)
Baseline: Tech Center average estimate • Based on career data from 0 resolved cases
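For orientation, the per-statute deltas are consistent with a single Tech Center baseline. The short Python check below assumes each delta is simply the examiner's career rate minus the Tech Center average estimate (that relationship is an assumption; the figures themselves are taken from the chart above):

```python
# Career rate and "vs TC avg" delta as shown above (percent).
examiner_rate = {"101": 14.3, "103": 51.4, "102": 20.0, "112": 8.6}
delta_vs_tc   = {"101": -25.7, "103": 11.4, "102": -20.0, "112": -31.4}

# Assuming delta = examiner_rate - TC average estimate, recover the baseline.
tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_average)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

Under that assumption, the implied Tech Center baseline works out to 40% for every statute, so the §103 bar is the only one above the baseline.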

Office Action

Grounds of rejection: §103 (obviousness); non-statutory double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification Objections

The specification is objected to because of the following informalities:
In paragraph [00030], line 8, “which may including providing…” should read “which may include providing…”
In paragraph [00065], line 3, “first goes to generate A 312…” should read “first goes to generator A 312…”
In paragraph [00068], line 11, “a portion is unreadable Thus, in original…” should read “a portion is unreadable. Thus, in original…”
In paragraph [00073], line 6, “for each now image…” should read “for each new image…”
In paragraph [00074], line 9, “for each now image…” should read “for each new image…”
In paragraph [00076], line 2, “the generate that…” should read “the generator that…”
Appropriate correction is required.

Double Patenting

The non-statutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A non-statutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on non-statutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claim 1 is rejected on the ground of non-statutory double patenting as being unpatentable over claim 1 of Co-pending Application No. 18/755,424 in view of NEPOMNIACHTCHI et al. (US 20200311407 A1).
Claim 10 is rejected on the ground of non-statutory double patenting as being unpatentable over claim 10 of Co-pending Application No. 18/755,424 in view of NEPOMNIACHTCHI et al. (US 20200311407 A1) and further in view of COWAN (US 20230010164 A1).

Although claims 1 and 10 of this Application No. 18/591,872 and the claims at issue are not identical, they are not patentably distinct from each other because the instant application and the conflicting co-pending application are claiming common subject matter, as follows:

This Application No. 18/591,872

Claim 1: A system comprising: a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising: accessing a first image of a document for a user that is submitted for a document verification process of the document; executing an image processing neural network framework comprising at least a generative artificial intelligence (AI) model having a first neural network generator that generates high-quality images and low-quality images from real image training data and a second neural network discriminator that distinguishes between the high-quality images and the low-quality images; generating repaired image data for the first image that alters at least one field in the first image to improve an image quality of the at least one field, wherein the image quality in the first image of the at least one field limits a data extraction of corresponding document data on the document used for the document verification process, and wherein the repaired image data comprises image data for the at least one field that enables the data extraction of the corresponding document data; providing the repaired image data with the first image; and executing an action based on the repaired image data for the document verification process.

Claim 10: A method comprising: detecting that a first image of a document for a user that is submitted for a document verification process of the document has failed due to an image quality of at least one field in the first image; determining that the first image is capable of being repaired using an image processing neural network (NN) framework comprising at least a generative artificial intelligence (AI) having generator NNs that generate high-quality images and low-quality images from real image training data and discriminator NNs that distinguish between the high-quality images and the low-quality images, wherein a first pair of the generator NNs and discriminator NNs is trained using reverse training with the real image training data to generate the low-quality images from the high-quality images; generating a repaired image of the first image that alters the at least one field in the first image that improves the image quality for a data extraction of corresponding document data on the document used for the document verification process; and outputting the repaired image with the first image in the document verification process.

Co-pending Application No. 18/755,424

Claim 1: A system comprising: a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to execute instructions to cause the system to: receive a document for a user that is submitted for a document verification of the document; execute a decision engine for document forgery detection that comprises a generative artificial intelligence (AI) model trained for fake document generation and a machine learning (ML) model trained for fake document identification, wherein the generative AI model includes a generative adversarial network (GAN) that generates fake documents and distinguishes between the fake documents and real documents for the document verification (wherein GAN is a neural network framework); score, using the decision engine, similarities of the document to a plurality of preselected documents for the document forgery detection, wherein the plurality of preselected documents are associated with known document formats used for the document verification of documents; determine, using the decision engine, whether to flag the document as a potentially forged document based on the scored similarities; and execute a decision on the document verification based on whether the document is flagged as the potentially forged document.

Claim 10: A method comprising: receiving document training data for a generative artificial intelligence (AI) that generates fake documents from legitimate documents; training a generator neural network (NN) and a discriminator NN using the document training data, wherein the generator NN generates the fake documents from the legitimate documents and document features identified in the legitimate documents, and wherein the discriminator NN provides feedback identifying whether each of the fake documents appears real or generated; generating, using the generator NN of the generative AI after the training, additional fake documents for a machine learning (ML) model that performs fake document identification; training the ML model using at least the additional fake documents; and implementing the ML model with a decision engine for computations of document authenticity scores utilized for decisions on document forgery, wherein the computations are based on similarity scores between input documents and challenger documents including at least the additional fake documents.

Although Co-pending Application No. 18/755,424 claim 1 teaches a system comprising: a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising: accessing a first image of a document for a user that is submitted for a document verification process of the document; executing an image processing neural network framework comprising at least a generative artificial intelligence (AI) model having a first neural network generator that generates high-quality images and low-quality images from real image training data and a second neural network discriminator that distinguishes between the high-quality images and the low-quality images; executing an action based on the repaired image data for the document verification process, Co-pending Application No.
18/755,424 claim 1, as stated in the table above with respect to claim 1, fails to clearly disclose generating repaired image data for the first image that alters at least one field in the first image to improve an image quality of the at least one field, wherein the image quality in the first image of the at least one field limits a data extraction of corresponding document data on the document used for the document verification process, and wherein the repaired image data comprises image data for the at least one field that enables the data extraction of the corresponding document data; providing the repaired image data with the first image. However, NEPOMNIACHTCHI et al. (US 20200311407 A1), explicitly teaches generating repaired image data for the first image that alters at least one field in the first image to improve an image quality of the at least one field (Fig. 1. Paragraph [0075]- NEPOMNIACHTCHI when an image of a driver's license (DL) is captured using the mobile device, one or more image correction steps are performed to improve the quality of the image and ensure that the content of the DL can be extracted.), wherein the image quality in the first image of the at least one field limits a data extraction of corresponding document data on the document used for the document verification process (Fig. 29A-B. Paragraph [0296]-NEPOMNIACHTCHI discloses an Image Focus IQA Test can be executed on a mobile image to determine whether the image is too blurry to be used by a mobile application. Blurry images are often unusable, and this test can help to identify such out-of-focus images and reject them.), and wherein the repaired image data comprises image data for the at least one field that enables the data extraction of the corresponding document data (Fig. 9. Paragraph [0128]-NEPOMNIACHTCHI discloses once the image is captured and corrected, and the data is extracted and adjusted, then the image, data, and any required credential information, such as username, password, and phone or device identifier, can be transmitted to the remote server for further processing.); providing the repaired image data with the first image (Fig. 24. Paragraph [0248]-NEPOMNIACHTCHI discloses after receiving a bi-tonal image containing a remittance coupon that is orientated right-side-up at operation 1802, the codeline at the bottom of the remittance coupon is read at operation 1804. Further in Paragraph [0259]-NEPOMNIACHTCHI discloses operation 1812 outputs the resulting bi-tonal image of the remittance coupon and gray-scale image of the remittance coupon (wherein the gray-scale image is the repaired image).); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Co-pending Application No. 
18/755,424 claim 1 of a system comprising: a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising: accessing a first image of a document for a user that is submitted for a document verification process of the document; executing an image processing neural network framework comprising at least a generative artificial intelligence (AI) model having a first neural network generator that generates high-quality images and low-quality images from real image training data and a second neural network discriminator that distinguishes between the high-quality images and the low-quality images; executing an action based on the repaired image data for the document verification process with the teachings of NEPOMNIACHTCHI et al. (US 20200311407 A1) of generating repaired image data for the first image that alters at least one field in the first image to improve an image quality of the at least one field, wherein the image quality in the first image of the at least one field limits a data extraction of corresponding document data on the document used for the document verification process, and wherein the repaired image data comprises image data for the at least one field that enables the data extraction of the corresponding document data; providing the repaired image data with the first image. Wherein having Co-pending Application No. 18/755,424 claim 1 generating repaired image data for the first image that alters at least one field in the first image to improve an image quality of the at least one field, wherein the image quality in the first image of the at least one field limits a data extraction of corresponding document data on the document used for the document verification process, and wherein the repaired image data comprises image data for the at least one field that enables the data extraction of the corresponding document data; providing the repaired image data with the first image. The motivation behind the modification would have been to obtain an image repair system that improves the quality of the image and ensures that the content of the DL can be extracted. Although, Co-pending Application No. 18/755,424 claim 10 teaches a method comprising: determining that the first image is capable of being repaired using an image processing neural network (NN) framework comprising at least a generative artificial intelligence (AI) having generator NNs that generate high-quality images and low-quality images from real image training data and discriminator NNs that distinguish between the high-quality images and the low-quality images, generating a repaired image of the first image that alters the at least one field in the first image that improves the image quality for a data extraction of corresponding document data on the document used for the document verification process; and. Co-pending Application No. 18/755,424 claim 10, as stated in the table above with respect to claim 10, fails to clearly disclose detecting that a first image of a document for a user that is submitted for a document verification process of the document has failed due to an image quality of at least one field in the first image; outputting the repaired image with the first image in the document verification process. However, NEPOMNIACHTCHI et al. 
(US 20200311407 A1), explicitly teaches detecting that a first image of a document for a user that is submitted for a document verification process of the document has failed due to an image quality of at least one field in the first image (Fig. 29A-B. Paragraph [0296]- NEPOMNIACHTCHI discloses an Image Focus IQA Test can be executed on a mobile image to determine whether the image is too blurry to be used by a mobile application. Blurry images are often unusable, and this test can help to identify such out-of-focus images and reject them. The user can be provided detailed information to assist the user in taking a better quality image of the document.); outputting the repaired image with the first image in the document verification process (Fig. 24. Paragraph [0248]-NEPOMNIACHTCHI discloses after receiving a bi-tonal image containing a remittance coupon that is orientated right-side-up at operation 1802, the codeline at the bottom of the remittance coupon is read at operation 1804. Further in Paragraph [0259]-NEPOMNIACHTCHI discloses operation 1812 outputs the resulting bi-tonal image of the remittance coupon and gray-scale image of the remittance coupon (wherein the gray-scale image is the repaired image).) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Co-pending Application No. 18/755,424 claim 10 of a method comprising: determining that the first image is capable of being repaired using an image processing neural network (NN) framework comprising at least a generative artificial intelligence (AI) having generator NNs that generate high-quality images and low-quality images from real image training data and discriminator NNs that distinguish between the high-quality images and the low-quality images, generating a repaired image of the first image that alters the at least one field in the first image that improves the image quality for a data extraction of corresponding document data on the document used for the document verification process; and wherein a first pair of the generator NNs and discriminator NNs is trained using reverse training with the real image training data to generate the low-quality images from the high-quality images with the teachings of NEPOMNIACHTCHI et al. (US 20200311407 A1) of detecting that a first image of a document for a user that is submitted for a document verification process of the document has failed due to an image quality of at least one field in the first image; outputting the repaired image with the first image in the document verification process. Wherein having Co-pending Application No. 18/755,424 claim 10 detecting that a first image of a document for a user that is submitted for a document verification process of the document has failed due to an image quality of at least one field in the first image; outputting the repaired image with the first image in the document verification process. The motivation behind the modification would have been to obtain an image repair system that improves the quality of the image and ensures that the content of the DL can be extracted. Co-pending Application No. 18/755,424 in view of NEPOMNIACHTCHI et al. (US 20200311407 A1) fail to explicitly teach and wherein a first pair of the generator NNs and discriminator NNs is trained using reverse training with the real image training data to generate the low-quality images from the high-quality images. 
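The "reverse training" limitation at issue here describes a generator/discriminator pair trained so that the generator produces low-quality images from high-quality training images. A minimal sketch of that arrangement follows before the discussion turns to COWAN's teaching; it assumes PyTorch, and the module names and architectures are illustrative only, not taken from the application or from the cited references.

```python
# Sketch of a "reverse training" GAN pair: the generator degrades high-quality
# document images into synthetic low-quality ones, and the discriminator learns
# to tell real low-quality captures from the synthetic degradations.
import torch
import torch.nn as nn

class DegradationGenerator(nn.Module):  # high-quality in -> low-quality out
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, hq):
        return self.net(hq)

class LowQualityDiscriminator(nn.Module):  # real low-quality vs. generated
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, img):
        return self.net(img)

def reverse_training_step(gen, disc, hq_batch, real_lq_batch, g_opt, d_opt):
    bce = nn.BCEWithLogitsLoss()
    real = torch.ones(real_lq_batch.size(0), 1)
    fake = torch.zeros(hq_batch.size(0), 1)
    # Discriminator: score real low-quality captures high, generated degradations low.
    fake_lq = gen(hq_batch).detach()
    d_loss = bce(disc(real_lq_batch), real) + bce(disc(fake_lq), fake)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: produce degradations the discriminator accepts as real captures.
    g_loss = bce(disc(gen(hq_batch)), torch.ones(hq_batch.size(0), 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

In practice such a reverse pair would sit alongside a forward (low-quality to high-quality) repair generator; the sketch covers only the reverse direction recited in the claim.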
However, COWAN (US 20230010164 A1) explicitly teaches and wherein a first pair of the generator NNs and discriminator NNs is trained using reverse training with the real image training data to generate the low-quality images from the high-quality images (Fig. 2. Paragraph [0064]-COWAN discloses the discriminator input data generator 240 can generate a first instance of the discriminator input data 245 by combining (e.g., by stacking) the high-resolution synthesized image 235 outputted by the generator neural network with the generator training input data (i.e., the low-resolution training image 210a and/or the reference training image 210b). The discriminator input data generator 240 can also generate a second instance of the discriminator input data 245 by combining (e.g., by stacking) the high-resolution training image 210c with the generator training input data (i.e., the low-resolution training image 210a and the reference training image 210b).). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Co-pending Application No. 18/755,424 claim 10 in view of NEPOMNIACHTCHI et al. (US 20200311407 A1) of a method comprising: determining that the first image is capable of being repaired using an image processing neural network (NN) framework comprising at least a generative artificial intelligence (AI) having generator NNs that generate high-quality images and low-quality images from real image training data and discriminator NNs that distinguish between the high-quality images and the low-quality images, generating a repaired image of the first image that alters the at least one field in the first image that improves the image quality for a data extraction of corresponding document data on the document used for the document verification process; and wherein a first pair of the generator NNs and discriminator NNs is trained using reverse training with the real image training data to generate the low-quality images from the high-quality images with the teachings of COWAN (US 20230010164 A1) of and wherein a first pair of the generator NNs and discriminator NNs is trained using reverse training with the real image training data to generate the low-quality images from the high-quality images. Wherein having Co-pending Application No. 18/755,424 claim 10 and wherein a first pair of the generator NNs and discriminator NNs is trained using reverse training with the real image training data to generate the low-quality images from the high-quality images. The motivation behind the modification would have been to obtain an image repair system with an improved performance and/or training efficacy of a GAN model. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim 1, 5, 7, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over NEPOMNIACHTCHI et al. (US 20200311407 A1), hereinafter referenced as NEPOMNIACHTCHI, in view of COWAN (US 20230010164 A1), hereinafter referenced as COWAN. Regarding claim 1, NEPOMNIACHTCHI explicitly teaches a system comprising (Fig. 1. Paragraph [0078]-NEPOMNIACHTCHI discloses FIG. 1 illustrates one embodiment of a system 100 for capturing an image of a driver's license (DL) with a mobile device and processing the image to extract content.): a non-transitory memory (Fig. 50. #4420 called memory. Paragraph [0416]- NEPOMNIACHTCHI discloses the memory 4420 can comprise volatile memory, such as RAM and/or persistent memory, such as flash memory.); and one or more hardware processors (Fig. 50. #4410 called a processor) coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising (Paragraph [0416]-NEPOMNIACHTCHI discloses the processor 4410 can be a microprocessor or the like that is configurable to execute program instructions stored in the memory 4420 and/or the data storage 4440.): accessing a first image of a document for a user that is submitted for a document verification process of the document (Fig. 1. Paragraph [0078]-NEPOMNIACHTCHI discloses FIG. 1 illustrates one embodiment of a system 100 for capturing an image of a driver's license (DL) with a mobile device and processing the image to extract content. Further in Paragraph [0083]-NEPOMNIACHTCHI discloses the third party server 110 can be configured to utilize content from a DL, for example, by taking the information extracted from the images and matching it with information obtained about the person or submitting the information as verification of the person's identity.); generating repaired image data for the first image that alters at least one field in the first image to improve an image quality of the at least one field (Fig. 1. Paragraph [0075]- NEPOMNIACHTCHI when an image of a driver's license (DL) is captured using the mobile device, one or more image correction steps are performed to improve the quality of the image and ensure that the content of the DL can be extracted.), wherein the image quality in the first image of the at least one field limits a data extraction of corresponding document data on the document used for the document verification process (Fig. 29A-B. Paragraph [0296]-NEPOMNIACHTCHI discloses an Image Focus IQA Test can be executed on a mobile image to determine whether the image is too blurry to be used by a mobile application. Blurry images are often unusable, and this test can help to identify such out-of-focus images and reject them.), and wherein the repaired image data comprises image data for the at least one field that enables the data extraction of the corresponding document data (Fig. 9. 
Paragraph [0128]-NEPOMNIACHTCHI discloses once the image is captured and corrected, and the data is extracted and adjusted, then the image, data, and any required credential information, such as username, password, and phone or device identifier, can be transmitted to the remote server for further processing.); providing the repaired image data with the first image (Fig. 24. Paragraph [0248]-NEPOMNIACHTCHI discloses after receiving a bi-tonal image containing a remittance coupon that is orientated right-side-up at operation 1802, the codeline at the bottom of the remittance coupon is read at operation 1804. Further in Paragraph [0259]-NEPOMNIACHTCHI discloses operation 1812 outputs the resulting bi-tonal image of the remittance coupon and gray-scale image of the remittance coupon (wherein the gray-scale image is the repaired image).); and executing an action based on the repaired image data for the document verification process (Paragraph [0155]- NEPOMNIACHTCHI discloses the mobile device can be configured to send a corrected mobile image to the remote server for further processing.). NEPOMNIACHTCHI fails to explicitly teach executing an image processing neural network framework comprising at least a generative artificial intelligence (AI) model having a first neural network generator that generates high-quality images and low-quality images from real image training data and a second neural network discriminator that distinguishes between the high-quality images and the low-quality images. However, COWAN explicitly teaches executing an image processing neural network framework comprising at least a generative artificial intelligence (AI) model having a first neural network generator (Fig. 1. #121a called a generator. Paragraph [0044]) that generates high-quality images and low-quality images from real image training data (Fig. 2. Paragraph [0064]-COWAN discloses the discriminator input data generator 240 can generate a first instance of the discriminator input data 245 by combining (e.g., by stacking) the high-resolution synthesized image 235 outputted by the generator neural network with the generator training input data (i.e., the low-resolution training image 210a and/or the reference training image 210b). The discriminator input data generator 240 can also generate a second instance of the discriminator input data 245 by combining (e.g., by stacking) the high-resolution training image 210c with the generator training input data (i.e., the low-resolution training image 210a and the reference training image 210b) (wherein an image synthesized with a low-resolution training image is a low-quality image).) and a second neural network discriminator (Fig. 1. #121b called a discriminator. Paragraph [0044]) that distinguishes between the high-quality images and the low-quality images (Fig. 2. 
Paragraph [0065]-COWAN discloses the training engine uses the discriminator neural network 250 to process the first and the second instances of the discriminator input data 245, respectively, and generates a prediction 255 to distinguish between the high-resolution synthesized image 235 and the high-resolution training image 210 (the “real” image) included in the discriminator input data 245 (wherein an image synthesized with a low-resolution training image is a low-quality image).); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of NEPOMNIACHTCHI of a system comprising: a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising: accessing a first image of a document for a user that is submitted for a document verification process of the document; generating repaired image data for the first image that alters at least one field in the first image to improve an image quality of the at least one field, with the teachings of COWAN of executing an image processing neural network framework comprising at least a generative artificial intelligence (AI) model having a first neural network generator that generates high-quality images and low-quality images from real image training data and a second neural network discriminator that distinguishes between the high-quality images and the low-quality images. Wherein having NEPOMNIACHTCHI’s system of document verification executing an image processing neural network framework comprising at least a generative artificial intelligence (AI) model having a first neural network generator that generates high-quality images and low-quality images from real image training data and a second neural network discriminator that distinguishes between the high-quality images and the low-quality images. The motivation behind the modification would have been to obtain a document verification system that enhances the efficiency and accuracy of the verification process. Since both NEPOMNIACHTCHI and COWAN relate to processing images received via a network, wherein NEPOMNIACHTCHI is it to improve the quality of the image and ensure that the content of the DL can be extracted, while COWAN it is for improving the performance and/or training efficacy of a GAN model. Please see NEPOMNIACHTCHI et al. (US 20200311407 A1), Paragraph [0075], and COWAN (US 20230010164 A1), Paragraph [0004]. Regarding claim 5, NEPOMNIACHTCHI in view of COWAN explicitly teach the system of claim 1, NEPOMNIACHTCHI further explicitly teaches wherein, prior to the accessing the first image, the operations further comprise (Paragraph [0078]-NEPOMNIACHTCHI discloses the remote server 104 may send information to the mobile device 102 (and specifically an application running on the mobile device) regarding the parameters that should be measured and the values of the thresholds required to capture an image of a DL (wherein the server sends information before capturing image).): identifying that the first image is at or below a threshold image quality to qualify as a low-quality image (Fig. 29A-B. Paragraph [0296]- NEPOMNIACHTCHI discloses an Image Focus IQA Test can be executed on a mobile image to determine whether the image is too blurry to be used by a mobile application. 
Blurry images are often unusable, and this test can help to identify such out-of-focus images and reject them.), wherein the accessing is performed in response to the identifying (Fig. 29A-B. Paragraph [0296]- NEPOMNIACHTCHI discloses an Image Focus IQA Test can be executed on a mobile image to determine whether the image is too blurry to be used by a mobile application. Blurry images are often unusable, and this test can help to identify such out-of-focus images and reject them. The user can be provided detailed information to assist the user in taking a better quality image of the document. For example, the blurriness may have been the result of motion blur caused by the user moving the camera while taking the image. The test result messages can suggest that the user hold the camera steadier when retaking the image.) Regarding claim 7, NEPOMNIACHTCHI in view of COWAN explicitly teach the system of claim 1, NEPOMNIACHTCHI further explicitly teaches wherein the providing the repaired image data with the first image is to a document verification platform, and wherein the operations further comprise (Fig. 1. Paragraph [0083]-NEPOMNIACHTCHI discloses the third party server 110 can be configured to utilize content from a DL, for example, by taking the information extracted from the images and matching it with information obtained about the person or submitting the information as verification of the person's identity.): processing a verification of the document for the user in the document verification platform using the document verification process, the repaired image data, and the first image (Fig. 1. Paragraph [0079]- NEPOMNIACHTCHI the remote server 104 may be connected with a DL format database 106 which stores format information on known types of DLs used to identify a DL in a captured image, as will be described in further detail below. Once the mobile device 102 or remote server 104 has extracted and identified all of the relevant data from the image of the DL, the extracted data and the captured and processed images may be stored in a content database 108 connected with the mobile device 102 or remote server 104. The extracted data may then be transmitted to a third party server 110 which will use the content from the DL for any one of many different applications (wherein the repaired image data is the extracted data).). Regarding claim 9, NEPOMNIACHTCHI in view of COWAN explicitly teach the system of claim 1, NEPOMNIACHTCHI further explicitly teaches wherein the executing the action comprises one of submitting the first image with the repaired image data to the document verification process for processing (Fig. 1. Paragraph [0083]-NEPOMNIACHTCHI discloses the third party server 110 can be configured to utilize content from a DL, for example, by taking the information extracted from the images and matching it with information obtained about the person or submitting the information as verification of the person's identity. Further in Paragraph [0128]-NEPOMNIACHTCHI discloses once the image is captured and corrected, and the data is extracted and adjusted, then the image, data, and any required credential information, such as username, password, and phone or device identifier, can be transmitted to the remote server for further processing.), performing an optical character recognition (OCR) process on the first image with the repaired image data for the data extraction (Fig. 1. 
Paragraph [0120]- NEPOMNIACHTCHI discloses once the binarized image is produced, it may be outputted for processing via optical character recognition (OCR) or other related processes which will detect and extract text and other characters from the image of the DL. As a result of the processing steps described above, the image of the DL in the outputted binarized image will provide a high confidence level extraction for OCR. The content of the DL can therefore be quickly and accurately obtained even from a mobile image of the DL.), or transmitting a request for the user to resubmit the document in a second image (Fig. 29A-B. Paragraph [0296]- NEPOMNIACHTCHI discloses an Image Focus IQA Test can be executed on a mobile image to determine whether the image is too blurry to be used by a mobile application. Blurry images are often unusable, and this test can help to identify such out-of-focus images and reject them. The user can be provided detailed information to assist the user in taking a better quality image of the document. For example, the blurriness may have been the result of motion blur caused by the user moving the camera while taking the image. The test result messages can suggest that the user hold the camera steadier when retaking the image.). Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over NEPOMNIACHTCHI et al. (US 20200311407 A1), hereinafter referenced as NEPOMNIACHTCHI, in view of COWAN (US 20230010164 A1), hereinafter referenced as COWAN, and further in view of SASAGAWA (US 20210248463 A1), hereinafter referenced as SASAGAWA. Regarding claim 2, NEPOMNIACHTCHI in view of COWAN explicitly teach the system of claim 1, NEPOMNIACHTCHI explicitly teaches wherein, prior to the accessing, the operations further comprise (Paragraph [0078]-NEPOMNIACHTCHI discloses the remote server 104 may send information to the mobile device 102 (and specifically an application running on the mobile device) regarding the parameters that should be measured and the values of the thresholds required to capture an image of a DL (wherein the server sends information before capturing image).): NEPOMNIACHTCHI fails to explicitly teach training the first neural network generator and the second neural network discriminator using training data comprising an image data set having image pairs of high-quality images with low-quality images, and wherein the training the first neural network generator and the second neural network discriminator comprises: training the first neural network generator and the second neural network discriminator using the image training data, wherein the first neural network generator comprises a first generator and a second generator, and However, COWAN explicitly teaches training the first neural network generator and the second neural network discriminator using training data comprising an image data set having image pairs of high-quality images with low-quality images (Fig. 1. Paragraph [0028]-COWAN discloses the system 120 can obtain a plurality of training examples 110, and processes the training examples 110 using a training engine 122 of the system to update network parameters 124 of a machine-learning model 121. Each training example can include a low-resolution training image 110a of an area, a reference training image 110b of the same area, and a high-resolution training image 110c of the same area.), and wherein the training the first neural network generator and the second neural network discriminator comprises (Fig. 2. 
Paragraph [0047]-COWAN discloses in the GAN configuration, the generator neural network 121a is trained together with the discriminator neural network 121b based on a plurality of training examples.): training the first neural network generator and the second neural network discriminator using the image training data (Fig. 2. Paragraph [0047]-COWAN discloses in the GAN configuration, the generator neural network 121a is trained together with the discriminator neural network 121b based on a plurality of training examples.), wherein the first neural network generator comprises a first generator and a second generator ([0061]-COWAN discloses the training engine includes a generator input data generator 220 that generates training input data 225 for the generator neural network (wherein the first generator is the input data generator and the second generator is the generator neural network).), and Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of NEPOMNIACHTCHI of a system comprising: a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising: accessing a first image of a document for a user that is submitted for a document verification process of the document; generating repaired image data for the first image that alters at least one field in the first image to improve an image quality of the at least one field, with the teachings of COWAN of training the first neural network generator and the second neural network discriminator using training data comprising an image data set having image pairs of high-quality images with low-quality images, and wherein the training the first neural network generator and the second neural network discriminator comprises: training the first neural network generator and the second neural network discriminator using the image training data, wherein the first neural network generator comprises a first generator and a second generator. Wherein having NEPOMNIACHTCHI’s system of document verification training the first neural network generator and the second neural network discriminator using training data comprising an image data set having image pairs of high-quality images with low-quality images, and wherein the training the first neural network generator and the second neural network discriminator comprises: training the first neural network generator and the second neural network discriminator using the image training data, wherein the first neural network generator comprises a first generator and a second generator. The motivation behind the modification would have been to obtain a document verification system that enhances the efficiency and accuracy of the verification process. Since both NEPOMNIACHTCHI and COWAN relate to processing images received via a network, wherein NEPOMNIACHTCHI is it to improve the quality of the image and ensure that the content of the DL can be extracted, while COWAN it is for improving the performance and/or training efficacy of a GAN model. Please see NEPOMNIACHTCHI et al. (US 20200311407 A1), Paragraph [0075], and COWAN (US 20230010164 A1), Paragraph [0004]. NEPOMNIACHTCHI in view of COWAN fail to explicitly teach wherein the second neural network discriminator comprises a first discriminator and a second discriminator. 
However, SASAGAWA explicitly teaches wherein the second neural network discriminator comprises a first discriminator and a second discriminator (Fig. 5. Paragraph [0057]-SASAGAWA discloses the branch within the broken line indicated by (c) in FIG. 5 is referred to as branch for pre-quantization model 41a, and the branch within the broken line indicated by (d) in FIG. 5 is referred to as branch for quantized model 41b (wherein the pre-quantization model 41a is the first discriminator and the quantized model 41b is the second discriminator).). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of NEPOMNIACHTCHI in view of COWAN of a system comprising: a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising: accessing a first image of a document for a user that is submitted for a document verification process of the document; generating repaired image data for the first image that alters at least one field in the first image to improve an image quality of the at least one field, with the teachings of SASAGAWA of wherein the second neural network discriminator comprises a first discriminator and a second discriminator. Wherein having NEPOMNIACHTCHI’s system of document verification wherein the second neural network discriminator comprises a first discriminator and a second discriminator. The motivation behind the modification would have been to obtain a document verification system that enhances the efficiency and accuracy of the verification process. Since both NEPOMNIACHTCHI and SASAGAWA relate to processing images received via a network, wherein NEPOMNIACHTCHI is it to improve the quality of the image and ensure that the content of the DL can be extracted, while SASAGAWA makes it possible to derive a neural network having robustness to a variation in parameter or input data of the neural network. Please see NEPOMNIACHTCHI et al. (US 20200311407 A1), Paragraph [0075], and SASAGAWA (US 20210248463 A1), Paragraph [0012]. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over NEPOMNIACHTCHI et al. (US 20200311407 A1), hereinafter referenced as NEPOMNIACHTCHI, in view of COWAN (US 20230010164 A1), hereinafter referenced as COWAN, and further in view of CHEN et al. (DOI: 10.1016/j.isci.2023.107169), hereinafter referenced as CHEN. Regarding claim 6, NEPOMNIACHTCHI in view of COWAN explicitly teach the system of claim 1, NEPOMNIACHTCHI in view of COWAN explicitly teach fail to explicitly teach wherein the providing the repaired image data comprises repairing the first image with the repaired image data to improve the image quality of at least a portion of the first image associated with the at least one field to be of a higher quality in the repaired first image than the first image. However, CHEN further explicitly teaches wherein the providing the repaired image data comprises repairing the first image with the repaired image data to improve the image quality of at least a portion of the first image associated with the at least one field to be of a higher quality in the repaired first image than the first image (Fig. 2. Page 14-15, Lines [30-33] and Lines [1-3]. 
CHEN discloses TSDRA-GAN consists of a coarse network, a fine network, and a discriminator, where the coarse network is responsible for the coarse repair of iris texture and performs contour repair and coarse repair of iris texture on the original image. The network in this stage calculates the L1 loss with the ground truth (GT), repairs the contour of the area to be repaired and generates the coarse iris texture. Then, the generated image is used as the input of the fine network instead of the original image. The fine network performs texture refinement restoration on the input, and the generated image in this stage is input to the discriminator together with GT to calculate the WGAN-GP loss.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of NEPOMNIACHTCHI in view of COWAN of a system comprising: a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising: accessing a first image of a document for a user that is submitted for a document verification process of the document; generating repaired image data for the first image that alters at least one field in the first image to improve an image quality of the at least one field, with the teachings of CHEN of wherein the providing the repaired image data comprises repairing the first image with the repaired image data to improve the image quality of at least a portion of the first image associated with the at least one field to be of a higher quality in the repaired first image than the first image. Wherein having NEPOMNIACHTCHI’s system of document verification wherein the providing the repaired image data comprises repairing the first image with the repaired image data to improve the image quality of at least a portion of the first image associated with the at least one field to be of a higher quality in the repaired first image than the first image. The motivation behind the modification would have been to obtain a document verification system that enhances the efficiency and accuracy of the verification process. Since both NEPOMNIACHTCHI and CHEN relate to processing images with a network, wherein NEPOMNIACHTCHI is it to improve the quality of the image and ensure that the content of the DL can be extracted, while CHEN it is to enrich iris texture information and improve iris recognition accuracy. Please see NEPOMNIACHTCHI et al. (US 20200311407 A1), Paragraph [0075], and CHEN et al. (DOI: 10.1016/j.isci.2023.107169), Page 1 [Lines 32-42]. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over NEPOMNIACHTCHI et al. (US 20200311407 A1), hereinafter referenced as NEPOMNIACHTCHI, in view of COWAN (US 20230010164 A1), hereinafter referenced as COWAN, and further in view of EDWARDS et al. (US 10685347 B1), hereinafter referenced as EDWARDS. Regarding claim 8, NEPOMNIACHTCHI in view of COWAN explicitly teach the system of claim 7, NEPOMNIACHTCHI in view of COWAN fail to explicitly teach wherein the accessing the first image is performed in response to the verification of the document initially failing the document verification process in the document verification platform. 
However, EDWARDS explicitly teaches wherein the accessing the first image is performed in response to the verification of the document initially failing the document verification process in the document verification platform (Col. 9. Lines [16-27]-EDWARDS discloses some of the information that the activation platform uses to verify the document, the identity, and/or the image may not match expected information, or may not be present (e.g., in a received image, in metadata, and/or the like). In this case, the activation platform may determine the score to determine a likelihood of an authenticity of the document, the identity, and/or the image based on which information matched or failed to match expected information, and may verify the document, the identity, and/or the image based on whether the score satisfies a threshold, may perform an action based on whether the score satisfies a threshold, and/or the like, as described elsewhere herein.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of NEPOMNIACHTCHI in view of COWAN of a system comprising: a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising: accessing a first image of a document for a user that is submitted for a document verification process of the document; generating repaired image data for the first image that alters at least one field in the first image to improve an image quality of the at least one field, with the teachings of EDWARDS of wherein the accessing the first image is performed in response to the verification of the document initially failing the document verification process in the document verification platform. Wherein having NEPOMNIACHTCHI’s system of document verification wherein the accessing the first image is performed in response to the verification of the document initially failing the document verification process in the document verification platform. The motivation behind the modification would have been to obtain a document verification system that enhances the efficiency and accuracy of the verification process. Since both NEPOMNIACHTCHI and EDWARDS relate to processing images with a network, wherein NEPOMNIACHTCHI is it to improve the quality of the image and ensure that the content of the DL can be extracted, while EDWARDS is reducing or eliminating a risk of a fraudulent activation. Please see NEPOMNIACHTCHI et al. (US 20200311407 A1), Paragraph [0075], and EDWARDS et al. (US 10685347 B1), Col. 3. Lines [29-46]. Claim 10-12, 14-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over NEPOMNIACHTCHI et al. (US 20200311407 A1), hereinafter referenced as NEPOMNIACHTCHI, in view of CHEN et al. (DOI: 10.1016/j.isci.2023.107169), hereinafter referenced as CHEN, and further in view of COWAN (US 20230010164 A1), hereinafter referenced as COWAN. Regarding claim 10, NEPOMNIACHTCHI explicitly teaches a method comprising (Fig. 2. Paragraph [0086]-NEPOMNIACHTCHI discloses FIG. 
2 illustrates a method submitting an insurance claim using images captured by the mobile device, in accordance with one embodiment of the invention.): detecting that a first image of a document for a user that is submitted for a document verification process of the document has failed due to an image quality of at least one field in the first image (Fig. 29A-B. Paragraph [0296]- NEPOMNIACHTCHI discloses an Image Focus IQA Test can be executed on a mobile image to determine whether the image is too blurry to be used by a mobile application. Blurry images are often unusable, and this test can help to identify such out-of-focus images and reject them. The user can be provided detailed information to assist the user in taking a better quality image of the document.); generating a repaired image of the first image that alters the at least one field in the first image that improves the image quality for a data extraction of corresponding document data on the document used for the document verification process (Fig. 1. Paragraph [0075]- NEPOMNIACHTCHI when an image of a driver's license (DL) is captured using the mobile device, one or more image correction steps are performed to improve the quality of the image and ensure that the content of the DL can be extracted.); and outputting the repaired image with the first image in the document verification process (Fig. 24. Paragraph [0248]-NEPOMNIACHTCHI discloses after receiving a bi-tonal image containing a remittance coupon that is orientated right-side-up at operation 1802, the codeline at the bottom of the remittance coupon is read at operation 1804. Further in Paragraph [0259]-NEPOMNIACHTCHI discloses operation 1812 outputs the resulting bi-tonal image of the remittance coupon and gray-scale image of the remittance coupon (wherein the gray-scale image is the repaired image).). NEPOMNIACHTCHI fails to explicitly teach determining that the first image is capable of being repaired using an image processing neural network (NN) framework comprising at least a generative artificial intelligence (AI) having generator NNs that generate high-quality images and low-quality images from real image training data and discriminator NNs that distinguish between the high-quality images and the low-quality images. However, CHEN explicitly teaches determining that the first image is capable of being repaired (Fig. 1-2. Page 16, Lines [16-18]-CHEN discloses first, the network must effectively learn the location information of the region to be repaired; second, the network must effectively learn the feature information of the complete region and use it to repair the missing region.) using an image processing neural network (NN) framework comprising at least a generative artificial intelligence (AI) having generator NNs that generate high-quality images and low-quality images from real image training data (Fig. 2. Page 2, Lines [25-27]-CHEN discloses a two-stage deep residual attention-GAN (TSDRA-GAN), with a coarse repair network in the first stage to help the model locate the repair region and coarse texture generation and a fine repair network in the second stage to generate fine iris textures (wherein the coarse network generates low-quality images and the fine network generates high-quality images).) and discriminator NNs that distinguish between the high-quality images and the low-quality images (Fig. 2. 
Page 15, Lines [36-37]-CHEN discloses the discriminator aims to correctly determine whether the input image is a real image or a generated image and finally achieve Nash equilibrium (wherein the real image is a low-quality image and the generated image is a high-quality image).). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of NEPOMNIACHTCHI of a method comprising: detecting that a first image of a document for a user that is submitted for a document verification process of the document has failed due to an image quality of at least one field in the first image; generating a repaired image of the first image that alters the at least one field in the first image that improves the image quality for a data extraction of corresponding document data on the document used for the document verification process; and outputting the repaired image with the first image in the document verification process, with the teachings of CHEN of determining that the first image is capable of being repaired using an image processing neural network (NN) framework comprising at least a generative artificial intelligence (AI) having generator NNs that generate high-quality images and low-quality images from real image training data and discriminator NNs that distinguish between the high-quality images and the low-quality images. The modification would result in NEPOMNIACHTCHI's method of document verification determining that the first image is capable of being repaired using an image processing neural network (NN) framework comprising at least a generative artificial intelligence (AI) having generator NNs that generate high-quality images and low-quality images from real image training data and discriminator NNs that distinguish between the high-quality images and the low-quality images. The motivation behind the modification would have been to obtain a document verification system that enhances the efficiency and accuracy of the verification process. Both NEPOMNIACHTCHI and CHEN relate to processing images with a network: NEPOMNIACHTCHI's purpose is to improve the quality of the image and ensure that the content of the DL can be extracted, while CHEN's purpose is to enrich iris texture information and improve iris recognition accuracy. Please see NEPOMNIACHTCHI et al. (US 20200311407 A1), Paragraph [0075], and CHEN et al. (DOI: 10.1016/j.isci.2023.107169), Page 1 [Lines 32-42]. NEPOMNIACHTCHI in view of CHEN fail to explicitly teach wherein a first pair of the generator NNs and discriminator NNs is trained using reverse training with the real image training data to generate the low-quality images from the high-quality images. However, COWAN explicitly teaches wherein a first pair of the generator NNs and discriminator NNs is trained using reverse training with the real image training data to generate the low-quality images from the high-quality images (Fig. 2. Paragraph [0064]-COWAN discloses the discriminator input data generator 240 can generate a first instance of the discriminator input data 245 by combining (e.g., by stacking) the high-resolution synthesized image 235 outputted by the generator neural network with the generator training input data (i.e., the low-resolution training image 210a and/or the reference training image 210b).
The discriminator input data generator 240 can also generate a second instance of the discriminator input data 245 by combining (e.g., by stacking) the high-resolution training image 210c with the generator training input data (i.e., the low-resolution training image 210a and the reference training image 210b).). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of NEPOMNIACHTCHI in view of CHEN of a method comprising: detecting that a first image of a document for a user that is submitted for a document verification process of the document has failed due to an image quality of at least one field in the first image; generating a repaired image of the first image that alters the at least one field in the first image that improves the image quality for a data extraction of corresponding document data on the document used for the document verification process; and outputting the repaired image with the first image in the document verification process, with the teachings of COWAN of wherein a first pair of the generator NNs and discriminator NNs is trained using reverse training with the real image training data to generate the low-quality images from the high-quality images. The modification would result in NEPOMNIACHTCHI's method of document verification wherein a first pair of the generator NNs and discriminator NNs is trained using reverse training with the real image training data to generate the low-quality images from the high-quality images. The motivation behind the modification would have been to obtain a document verification system that enhances the efficiency and accuracy of the verification process. Both NEPOMNIACHTCHI and COWAN relate to processing images received via a network: NEPOMNIACHTCHI's purpose is to improve the quality of the image and ensure that the content of the DL can be extracted, while COWAN's purpose is to improve the performance and/or training efficacy of a GAN model. Please see NEPOMNIACHTCHI et al. (US 20200311407 A1), Paragraph [0075], and COWAN (US 20230010164 A1), Paragraph [0004].

Regarding claim 11, NEPOMNIACHTCHI in view of CHEN and further in view of COWAN explicitly teach the method of claim 10. NEPOMNIACHTCHI further explicitly teaches wherein the outputting the repaired image with the first image comprises retrying a verification of the document using the repaired image with the first image in the document verification process (Fig. 25. Paragraph [0268]-NEPOMNIACHTCHI discloses if a mobile image fails a critical test, the MDIPE 2100 rejects the image and can provide detailed information to the mobile device user explaining why the image was not of a high enough quality for the mobile application and that provides guidance for retaking the image to correct the defects that caused the mobile document image to fail the test, in the event that the defect can be corrected by retaking the image.), and wherein the retrying the verification includes identifying the repaired image for review during the document verification process of the at least one field altered to improve the image quality (Fig. 29A-B. Paragraph [0296]- NEPOMNIACHTCHI discloses an Image Focus IQA Test can be executed on a mobile image to determine whether the image is too blurry to be used by a mobile application. Blurry images are often unusable, and this test can help to identify such out-of-focus images and reject them.
The user can be provided detailed information to assist the user in taking a better quality image of the document. For example, the blurriness may have been the result of motion blur caused by the user moving the camera while taking the image. The test result messages can suggest that the user hold the camera steadier when retaking the image.).

Regarding claim 12, NEPOMNIACHTCHI in view of CHEN and further in view of COWAN explicitly teach the method of claim 10. NEPOMNIACHTCHI further explicitly teaches wherein the outputting the repaired image comprises outputting the repaired image in a user interface of a document verification process to the user with a request for a confirmation of the at least one field altered in the repaired image (Fig. 29A-B. Paragraph [0296]- NEPOMNIACHTCHI discloses an Image Focus IQA Test can be executed on a mobile image to determine whether the image is too blurry to be used by a mobile application. Blurry images are often unusable, and this test can help to identify such out-of-focus images and reject them. The user can be provided detailed information to assist the user in taking a better quality image of the document. For example, the blurriness may have been the result of motion blur caused by the user moving the camera while taking the image. The test result messages can suggest that the user hold the camera steadier when retaking the image.).

Regarding claim 14, NEPOMNIACHTCHI in view of CHEN and further in view of COWAN explicitly teach the method of claim 10. NEPOMNIACHTCHI in view of CHEN fail to explicitly teach wherein, prior to the detecting, the method further comprises: training the generator NNs and the discriminator NNs using training data comprising an image data set having image pairs of the high-quality images with the low-quality images. However, COWAN explicitly teaches wherein, prior to the detecting, the method further comprises (Fig. 2. Paragraph [0023]-COWAN discloses the trained discriminator neural network can be more effective in determining whether an input signal to the discriminator neural network (e.g., a high-resolution image) is a reasonable processed version (e.g., with resolution upscaling) of another input signal (e.g., a low-resolution input image).): training the generator NNs and the discriminator NNs using training data comprising an image data set having image pairs of the high-quality images with the low-quality images (Fig. 1. Paragraph [0028]-COWAN discloses the system 120 can obtain a plurality of training examples 110, and processes the training examples 110 using a training engine 122 of the system to update network parameters 124 of a machine-learning model 121. Each training example can include a low-resolution training image 110a of an area, a reference training image 110b of the same area, and a high-resolution training image 110c of the same area.).
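As a concrete illustration of the paired training recited in claim 14, a minimal PyTorch-style sketch of one generator/discriminator update over a batch of (low-quality, high-quality) document image pairs is shown below. The toy layer choices, losses, and learning rates are assumptions made for illustration only; they are not drawn from COWAN, the other cited references, or the claims.

```python
# Minimal sketch of one paired GAN training step (low-quality -> high-quality).
# The tiny networks and hyperparameters below are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Sequential(          # maps a degraded image to a repaired image
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
discriminator = nn.Sequential(      # scores an image: real high-quality vs. generated
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
)
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(low_q: torch.Tensor, high_q: torch.Tensor) -> None:
    """One update on a batch of paired (low-quality, high-quality) images."""
    real = torch.ones(high_q.size(0), 1)
    fake = torch.zeros(low_q.size(0), 1)
    repaired = generator(low_q)
    # Discriminator: label true high-quality images 1, generator outputs 0.
    d_loss = bce(discriminator(high_q), real) + bce(discriminator(repaired.detach()), fake)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator: fool the discriminator while staying close to the paired target.
    g_loss = bce(discriminator(repaired), real) + F.l1_loss(repaired, high_q)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In a setup like this, the paired L1 term is what ties the generator output to the known high-quality counterpart of each degraded training image.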
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of NEPOMNIACHTCHI in view of CHEN of a method comprising: detecting that a first image of a document for a user that is submitted for a document verification process of the document has failed due to an image quality of at least one field in the first image; generating a repaired image of the first image that alters the at least one field in the first image that improves the image quality for a data extraction of corresponding document data on the document used for the document verification process; and outputting the repaired image with the first image in the document verification process, with the teachings of COWAN of wherein, prior to the detecting, the method further comprises: training the generator NNs and the discriminator NNs using training data comprising an image data set having image pairs of the high-quality images with the low-quality images. The modification would result in NEPOMNIACHTCHI's method of document verification wherein, prior to the detecting, the method further comprises: training the generator NNs and the discriminator NNs using training data comprising an image data set having image pairs of the high-quality images with the low-quality images. The motivation behind the modification would have been to obtain a document verification system that enhances the efficiency and accuracy of the verification process. Both NEPOMNIACHTCHI and COWAN relate to processing images received via a network: NEPOMNIACHTCHI's purpose is to improve the quality of the image and ensure that the content of the DL can be extracted, while COWAN's purpose is to improve the performance and/or training efficacy of a GAN model. Please see NEPOMNIACHTCHI et al. (US 20200311407 A1), Paragraph [0075], and COWAN (US 20230010164 A1), Paragraph [0004].

Regarding claim 15, NEPOMNIACHTCHI in view of CHEN and further in view of COWAN explicitly teach the method of claim 14. NEPOMNIACHTCHI in view of CHEN fails to explicitly teach wherein, prior to the detecting, the method further comprises: performing a model finetuning of the generative AI model using a training cycle of the generator NNs and the discriminator NNs with the training data. However, COWAN explicitly teaches wherein, prior to the detecting, the method further comprises (Fig. 2. Paragraph [0023]-COWAN discloses the trained discriminator neural network can be more effective in determining whether an input signal to the discriminator neural network (e.g., a high-resolution image) is a reasonable processed version (e.g., with resolution upscaling) of another input signal (e.g., a low-resolution input image).): performing a model finetuning of the generative AI model using a training cycle of the generator NNs and the discriminator NNs with the training data (Fig. 2 illustrates a training cycle of the generator NN and discriminator NNs with training examples as inputs. Paragraph [0059]-COWAN discloses FIG. 2 shows a training process to learn network parameters of the GAN model.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of NEPOMNIACHTCHI in view of CHEN of a method comprising: detecting that a first image of a document for a user that is submitted for a document verification process of the document has failed due to an image quality of at least one field in the first image; generating a repaired image of the first image that alters the at least one field in the first image that improves the image quality for a data extraction of corresponding document data on the document used for the document verification process; and outputting the repaired image with the first image in the document verification process, with the teachings of COWAN of wherein, prior to the detecting, the method further comprises: performing a model finetuning of the generative AI model using a training cycle of the generator NNs and the discriminator NNs with the training data. The modification would result in NEPOMNIACHTCHI's method of document verification wherein, prior to the detecting, the method further comprises: performing a model finetuning of the generative AI model using a training cycle of the generator NNs and the discriminator NNs with the training data. The motivation behind the modification would have been to obtain a document verification system that enhances the efficiency and accuracy of the verification process. Both NEPOMNIACHTCHI and COWAN relate to processing images received via a network: NEPOMNIACHTCHI's purpose is to improve the quality of the image and ensure that the content of the DL can be extracted, while COWAN's purpose is to improve the performance and/or training efficacy of a GAN model. Please see NEPOMNIACHTCHI et al. (US 20200311407 A1), Paragraph [0075], and COWAN (US 20230010164 A1), Paragraph [0004].

Regarding claim 17, NEPOMNIACHTCHI in view of CHEN and further in view of COWAN explicitly teach the method of claim 10. NEPOMNIACHTCHI further explicitly teaches wherein the detecting comprises: identifying that the first image is at or below a threshold image quality to qualify as a low-quality image based at least on the image quality of the at least one field (Fig. 29A-B. Paragraph [0296]- NEPOMNIACHTCHI discloses an Image Focus IQA Test can be executed on a mobile image to determine whether the image is too blurry to be used by a mobile application. Blurry images are often unusable, and this test can help to identify such out-of-focus images and reject them.).

Regarding claim 18, NEPOMNIACHTCHI in view of CHEN and further in view of COWAN explicitly teach the method of claim 10. NEPOMNIACHTCHI further explicitly teaches wherein the document comprises a user identity document (Figs. 4A-4D illustrate a driver's license, a user identity document. Paragraph [0096]-NEPOMNIACHTCHI discloses FIG. 4A illustrates an image 400 of a DL 402 captured by a mobile device before the image 400 has been cropped), and wherein the at least one field comprises one of a user image or user information present on the document and used to verify an identity of the user (Figs. 4A-4D illustrate a driver's license (DL) with a user image and information on the DL pertaining to the user. Paragraph [0096]-NEPOMNIACHTCHI discloses FIG. 4A illustrates an image 400 of a DL 402 captured by a mobile device before the image 400 has been cropped.).
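For readers who want a concrete picture of the quality-threshold detection addressed in claim 17, the sketch below uses a variance-of-Laplacian sharpness score, a common blur heuristic. NEPOMNIACHTCHI's Image Focus IQA Test is not specified at this level of detail, so the function name and threshold value are assumptions for illustration.

```python
# Illustrative blur check only; this is a generic sharpness heuristic, not the
# Image Focus IQA Test actually described in the cited reference.
import cv2

BLUR_THRESHOLD = 100.0  # assumed tuning value, chosen per dataset in practice

def fails_focus_test(image_path: str, threshold: float = BLUR_THRESHOLD) -> bool:
    """Return True when the document image is likely too blurry to use."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian is a standard focus proxy: low variance => blurry.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness < threshold
```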
Regarding claim 19, NEPOMNIACHTCHI explicitly teaches a non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising (Fig. 50. Paragraph [0416]-NEPOMNIACHTCHI discloses the processor 4410 can be a microprocessor or the like that is configurable to execute program instructions stored in the memory 4420 and/or the data storage 4440. The memory 4420 is a computer-readable memory that can be used to store data and/or computer program instructions that can be executed by the processor 4410.): receiving a document image for a verification (Fig. 1. Paragraph [0078]-NEPOMNIACHTCHI discloses FIG. 1 illustrates one embodiment of a system 100 for capturing an image of a driver's license (DL) with a mobile device and processing the image to extract content. Further in Paragraph [0083]-NEPOMNIACHTCHI discloses the third party server 110 can be configured to utilize content from a DL, for example, by taking the information extracted from the images and matching it with information obtained about the person or submitting the information as verification of the person's identity.); wherein the repaired document image enables the data extraction of the corresponding document data in the at least one portion (Fig. 9. Paragraph [0128]-NEPOMNIACHTCHI discloses once the image is captured and corrected, and the data is extracted and adjusted, then the image, data, and any required credential information, such as username, password, and phone or device identifier, can be transmitted to the remote server for further processing.); and outputting, for the verification, the repaired document image with the document image (Fig. 24. Paragraph [0248]-NEPOMNIACHTCHI discloses after receiving a bi-tonal image containing a remittance coupon that is orientated right-side-up at operation 1802, the codeline at the bottom of the remittance coupon is read at operation 1804. Further in Paragraph [0259]-NEPOMNIACHTCHI discloses operation 1812 outputs the resulting bi-tonal image of the remittance coupon and gray-scale image of the remittance coupon (wherein the gray-scale image is the repaired image).). NEPOMNIACHTCHI fails to explicitly teach processing the document image using a neural network (NN) framework for document image repair, wherein the generator NN component and the discriminator NN component comprises at least two generator-discriminator pairs having one of the at least two trained using reverse training with the real image training data of the first quality and the second quality; repairing the document image using the NN framework that improves an image quality of at least one portion of the document image for a data extraction of corresponding document data in the document image used for the verification. However, CHEN explicitly teaches processing the document image using a neural network (NN) framework for document image repair (Fig. 2. Page 2, Lines [25-27]-CHEN discloses a two-stage deep residual attention-GAN (TSDRA-GAN), with a coarse repair network in the first stage to help the model locate the repair region and coarse texture generation and a fine repair network in the second stage to generate fine iris textures.), wherein the generator NN component and the discriminator NN component comprises at least two generator-discriminator pairs having one of the at least two trained using reverse training with the real image training data of the first quality and the second quality (Fig. 2.
Page 14 Lines [30-32]-CHEN discloses TSDRA-GAN consists of a coarse network, a fine network, and a discriminator, where the coarse network is responsible for the coarse repair of iris texture and performs contour repair and coarse repair of iris texture on the original image. Further Page 15 Lines [1-3]-CHEN discloses the fine network performs texture refinement restoration on the input, and the generated image in this stage is input to the discriminator together with GT to calculate the WGAN-GP loss.); repairing the document image using the NN framework that improves an image quality of at least one portion of the document image for a data extraction of corresponding document data in the document image used for the verification (Fig. 2. Page 1, Lines [32-34]-CHEN discloses a deep-learning-based and end-to-end method of repairing obscured iris textures to enrich iris texture information and improve iris recognition accuracy is proposed in this paper, structure diagram of TSDRA-GAN is shown in Figure 2.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of NEPOMNIACHTCHI of a non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising: receiving a document image for a verification; wherein the repaired document image enables the data extraction of the corresponding document data in the at least one portion; and outputting, for the verification, the repaired document image with the document image, with the teachings of CHEN of processing the document image using a neural network (NN) framework for document image repair, wherein the generator NN component and the discriminator NN component comprises at least two generator-discriminator pairs having one of the at least two trained using reverse training with the real image training data of the first quality and the second quality; repairing the document image using the NN framework that improves an image quality of at least one portion of the document image for a data extraction of corresponding document data in the document image used for the verification. The modification would result in NEPOMNIACHTCHI's document verification process processing the document image using a neural network (NN) framework for document image repair, wherein the generator NN component and the discriminator NN component comprises at least two generator-discriminator pairs having one of the at least two trained using reverse training with the real image training data of the first quality and the second quality; repairing the document image using the NN framework that improves an image quality of at least one portion of the document image for a data extraction of corresponding document data in the document image used for the verification. The motivation behind the modification would have been to obtain a document verification system that enhances the efficiency and accuracy of the verification process. Both NEPOMNIACHTCHI and CHEN relate to processing images with a network: NEPOMNIACHTCHI's purpose is to improve the quality of the image and ensure that the content of the DL can be extracted, while CHEN's purpose is to enrich iris texture information and improve iris recognition accuracy. Please see NEPOMNIACHTCHI et al. (US 20200311407 A1), Paragraph [0075], and CHEN et al. (DOI: 10.1016/j.isci.2023.107169), Page 1 [Lines 32-42].
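To make the coarse-then-fine repair idea cited from CHEN easier to follow, a simplified forward pass is sketched below: a coarse network fills the masked region, the result is blended back into the input, and a fine network refines the texture. The placeholder layers stand in for CHEN's residual-attention blocks and WGAN-GP training; this is not the TSDRA-GAN architecture.

```python
# Simplified two-stage (coarse -> fine) repair forward pass; placeholder layers only.
import torch
import torch.nn as nn

class TwoStageRepair(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # Stage 1: coarse network locates the damaged region and fills rough content.
        self.coarse = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )
        # Stage 2: fine network refines the coarse fill into detailed texture.
        self.fine = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, damaged: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # mask marks pixels to repair (1 = unreadable), shape (N, 1, H, W).
        coarse_out = self.coarse(torch.cat([damaged, mask], dim=1))
        # Keep original pixels outside the mask and the generated fill inside it.
        blended = damaged * (1 - mask) + coarse_out * mask
        return self.fine(blended)
```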
NEPOMNIACHTCHI in view of CHEN fails to explicitly teach wherein the NN framework comprises a generative artificial intelligence (AI) having a generator NN component that generates first images of a first quality and second images of a second quality that is lower than the first quality of the first images from real image training data and a discriminator NN component that distinguishes between the first images and the second images. However, COWAN explicitly teaches wherein the NN framework comprises a generative artificial intelligence (AI) having a generator NN component that generates first images of a first quality and second images of a second quality that is lower than the first quality of the first images from real image training data (Fig. 2. Paragraph [0064]-COWAN discloses the discriminator input data generator 240 can generate a first instance of the discriminator input data 245 by combining (e.g., by stacking) the high-resolution synthesized image 235 outputted by the generator neural network with the generator training input data (i.e., the low-resolution training image 210a and/or the reference training image 210b). The discriminator input data generator 240 can also generate a second instance of the discriminator input data 245 by combining (e.g., by stacking) the high-resolution training image 210c with the generator training input data (i.e., the low-resolution training image 210a and the reference training image 210b) (wherein an image synthesized with a low-resolution training image is a low-quality image).) and a discriminator NN component that distinguishes between the first images and the second images (Fig. 2. Paragraph [0065]-COWAN discloses the training engine uses the discriminator neural network 250 to process the first and the second instances of the discriminator input data 245, respectively, and generates a prediction 255 to distinguish between the high-resolution synthesized image 235 and the high-resolution training image 210 (the “real” image) included in the discriminator input data 245 (wherein an image synthesized with a low-resolution training image is a low-quality image).). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of NEPOMNIACHTCHI in view of CHEN of a non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising: receiving a document image for a verification; wherein the repaired document image enables the data extraction of the corresponding document data in the at least one portion; and outputting, for the verification, the repaired document image with the document image, with the teachings of COWAN of wherein the NN framework comprises a generative artificial intelligence (AI) having a generator NN component that generates first images of a first quality and second images of a second quality that is lower than the first quality of the first images from real image training data and a discriminator NN component that distinguishes between the first images and the second images.
The modification would result in NEPOMNIACHTCHI's document verification process wherein the NN framework comprises a generative artificial intelligence (AI) having a generator NN component that generates first images of a first quality and second images of a second quality that is lower than the first quality of the first images from real image training data and a discriminator NN component that distinguishes between the first images and the second images. The motivation behind the modification would have been to obtain a document verification system that enhances the efficiency and accuracy of the verification process. Both NEPOMNIACHTCHI and COWAN relate to processing images received via a network: NEPOMNIACHTCHI's purpose is to improve the quality of the image and ensure that the content of the DL can be extracted, while COWAN's purpose is to improve the performance and/or training efficacy of a GAN model. Please see NEPOMNIACHTCHI et al. (US 20200311407 A1), Paragraph [0075], and COWAN (US 20230010164 A1), Paragraph [0004].

Regarding claim 20, NEPOMNIACHTCHI in view of CHEN and further in view of COWAN explicitly teach the non-transitory machine-readable medium of claim 19. NEPOMNIACHTCHI in view of CHEN fail to explicitly teach wherein the reverse training comprises generating one or more of the first images of the first quality that is higher than the second quality by introducing a synthetic image effect to the first images. However, COWAN explicitly teaches wherein the reverse training comprises generating one or more of the first images of the first quality that is higher than the second quality by introducing a synthetic image effect to the first images (Fig. 2. Paragraph [0064]-COWAN discloses the discriminator input data generator 240 can generate a first instance of the discriminator input data 245 by combining (e.g., by stacking) the high-resolution synthesized image 235 outputted by the generator neural network with the generator training input data (i.e., the low-resolution training image 210a and/or the reference training image 210b). The discriminator input data generator 240 can also generate a second instance of the discriminator input data 245 by combining (e.g., by stacking) the high-resolution training image 210c with the generator training input data (i.e., the low-resolution training image 210a and the reference training image 210b) (wherein an image synthesized with a low-resolution training image introduces a synthetic image effect).). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of NEPOMNIACHTCHI in view of CHEN of a non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising: receiving a document image for a verification; wherein the repaired document image enables the data extraction of the corresponding document data in the at least one portion; and outputting, for the verification, the repaired document image with the document image, with the teachings of COWAN of wherein the reverse training comprises generating one or more of the first images of the first quality that is higher than the second quality by introducing a synthetic image effect to the first images.
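One way to picture the "synthetic image effect" discussed for claim 20 is a degradation pipeline that manufactures a low-quality counterpart of a clean document image for reverse training. The particular effects (blur, additive noise, JPEG recompression) and their parameter values below are assumptions for illustration, not limitations taken from COWAN or the application.

```python
# Hedged sketch: synthesizing a degraded counterpart of a clean document image.
# The chosen effects and parameter values are illustrative assumptions only.
import io
import numpy as np
from PIL import Image, ImageFilter

def degrade(clean: Image.Image) -> Image.Image:
    """Apply blur, noise, and recompression to mimic a poor capture."""
    blurred = clean.convert("RGB").filter(ImageFilter.GaussianBlur(radius=2))  # defocus stand-in
    arr = np.asarray(blurred).astype(np.float32)
    arr += np.random.normal(0.0, 8.0, arr.shape)                               # sensor-noise stand-in
    noisy = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    buf = io.BytesIO()
    noisy.save(buf, format="JPEG", quality=30)                                  # compression artifacts
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```

Pairs produced this way (degraded input, original target) could then feed a paired training step like the one sketched earlier for claim 14.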
The modification would result in NEPOMNIACHTCHI's document verification process wherein the reverse training comprises generating one or more of the first images of the first quality that is higher than the second quality by introducing a synthetic image effect to the first images. The motivation behind the modification would have been to obtain a document verification system that enhances the efficiency and accuracy of the verification process. Both NEPOMNIACHTCHI and COWAN relate to processing images received via a network: NEPOMNIACHTCHI's purpose is to improve the quality of the image and ensure that the content of the DL can be extracted, while COWAN's purpose is to improve the performance and/or training efficacy of a GAN model. Please see NEPOMNIACHTCHI et al. (US 20200311407 A1), Paragraph [0075], and COWAN (US 20230010164 A1), Paragraph [0004].

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over NEPOMNIACHTCHI et al. (US 20200311407 A1), hereinafter referenced as NEPOMNIACHTCHI, in view of CHEN et al. (DOI: 10.1016/j.isci.2023.107169), hereinafter referenced as CHEN, and further in view of COWAN (US 20230010164 A1), hereinafter referenced as COWAN, and further in view of DUTTA et al. (US 20230296516 A1), hereinafter referenced as DUTTA.

Regarding claim 16, NEPOMNIACHTCHI in view of CHEN and further in view of COWAN explicitly teach the method of claim 10. NEPOMNIACHTCHI in view of CHEN and further in view of COWAN fail to explicitly teach wherein the generator NNs and the discriminator NNs comprise two pairs of NNs each having one generator NN and one discriminator NN. However, DUTTA explicitly teaches wherein the generator NNs and the discriminator NNs comprise two pairs of NNs each having one generator NN and one discriminator NN (Fig. 6. Paragraph [0143]-DUTTA discloses the training context NN section comprises a training CycleGAN having two pairs of coupled generator and discriminator elements (632A and 632B and 634A and 634B).). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of NEPOMNIACHTCHI in view of CHEN and further in view of COWAN of a method comprising: detecting that a first image of a document for a user that is submitted for a document verification process of the document has failed due to an image quality of at least one field in the first image; generating a repaired image of the first image that alters the at least one field in the first image that improves the image quality for a data extraction of corresponding document data on the document used for the document verification process; and outputting the repaired image with the first image in the document verification process, with the teachings of DUTTA of wherein the generator NNs and the discriminator NNs comprise two pairs of NNs each having one generator NN and one discriminator NN. The modification would result in NEPOMNIACHTCHI's method of document verification wherein the generator NNs and the discriminator NNs comprise two pairs of NNs each having one generator NN and one discriminator NN. The motivation behind the modification would have been to obtain a document verification system that enhances the efficiency and accuracy of the verification process.
Both NEPOMNIACHTCHI and DUTTA relate to processing images received via a network: NEPOMNIACHTCHI's purpose is to improve the quality of the image and ensure that the content of the DL can be extracted, while DUTTA's purpose is to improve performance, improve accuracy, and/or reduce cost. Please see NEPOMNIACHTCHI et al. (US 20200311407 A1), Paragraph [0075], and DUTTA et al. (US 20230296516 A1), Paragraph [0039].

Allowable Subject Matter

Claims 3 and 13, along with dependent claim 4, are objected to as being dependent upon rejected base claims 1 and 10, respectively, but would be allowable if rewritten in independent form including all of the limitations of the base claims and any intervening claims, once the claim objections and the double patenting rejection are overcome. The following is a statement of reasons for indication of allowable subject matter: Regarding claim 3, the prior art of record fails to explicitly teach the system of claim 2, wherein the first generator generates new high-quality images from the low-quality images and the first discriminator provides image quality feedback identifying an image quality of each of the new high-quality images to the first generator, wherein the second generator generates new low-quality images from the high-quality images and the second discriminator provides the image quality feedback identifying the image quality of each of the new low-quality images to the second generator, and wherein the second generator generates the new low-quality images using higher order augmentation, as claimed in claim 3. Regarding claim 13, the prior art of record fails to explicitly teach the method of claim 10, wherein the reverse training includes generating the low-quality images from the high-quality images using the one of the generator NNs for the first pair and higher order augmentation for a sequence of image operations applied to the high-quality images, and wherein the reverse training further includes training a corresponding one of the discriminator NNs for the first pair using the generated low-quality images with corresponding ones of the high-quality images, as claimed in claim 13.

Conclusion

Listed below is prior art made of record and not relied upon but considered pertinent to applicant's disclosure.

WANG et al. (US 12367669 B2) - Techniques are disclosed relating to automatically determining image quality for images of documents. In some embodiments, a computer system receives an image of a document captured at a user computing device. Using a neural network, the computer system analyzes the image to determine whether the image satisfies a quality threshold, where the analyzing includes determining whether one or more features in the image used in an authentication process are obscured. The computer system transmits, to the user computing device, a quality result, where the quality result is generated based on an image classification output by the neural network. Automatically determining whether a received image of a document satisfies a quality threshold may advantageously improve the chances of a system being able to complete an authentication process quickly, which in turn may improve user experience while reducing fraudulent activity.

HWANG (US 20230196536 A1) - Disclosed herein is a method for improving the quality and realism of a rendered image.
The method includes receiving training data including a real image and a rendered image, generating a low-quality image using the training data, generating a high-quality image using the low-quality image, generating a realistic image using the high-quality image, and training a neural network using an error calculated based on the high-quality image and the realistic image.

TANG et al. (US 20210350516 A1) - Techniques are disclosed relating to determining whether document objects included in an image correspond to known document types. In some embodiments, a computing system maintains information specifying a set of known document types. In some embodiments, the computing system receives an image that includes objects. In some embodiments, the computing system analyzes, using a first neural network, the image to identify a document object and location information specifying a location of the document object within the image. In some embodiments, the computing system determines, using a second neural network, whether the document object within the image corresponds to a document type specified in the set of known document types, where the determining is performed based on the location information of the document object. In some embodiments, disclosed techniques may assist in automatically extracting information from documents, which in turn may advantageously decrease processing time for onboarding new customers.

BAI (US 20210065337 A1) - The disclosure provides methods and image processing devices for image super resolution, image enhancement, and convolutional neural network (CNN) model training. The method for image super resolution includes the following steps. An original image is received, and a feature map is extracted from the original image. The original image is segmented into original patches. Each of the original patches is classified respectively into one of patch clusters according to the feature map. The original patches are processed respectively by different pre-trained CNN models according to the belonging patch clusters to obtain predicted patches. A predicted image is generated based on the predicted patches.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ETHAN N WOLFSON whose telephone number is (571)272-1898. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ETHAN N WOLFSON/
Examiner, Art Unit 2673

/CHINEYERE WILLS-BURNS/
Supervisory Patent Examiner, Art Unit 2673

Prosecution Timeline

Feb 29, 2024: Application Filed
Feb 05, 2026: Non-Final Rejection (§103, §DP)
Apr 12, 2026: Interview Requested

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
