Prosecution Insights
Last updated: April 19, 2026
Application No. 18/687,384

METHOD FOR DETECTING A FORGERY OF AN IDENTITY DOCUMENT

Non-Final OA (§101, §103)
Filed: Feb 28, 2024
Examiner: ROSARIO, DENNIS
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Thales Dis France SAS
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
OA Rounds: 1-2
To Grant: 3y 8m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 69% (385 granted / 557 resolved; +7.1% vs TC avg, above average)
Interview Lift: +28.6% (allowance rate with vs. without an interview, among resolved cases with an interview)
Avg Prosecution: 3y 8m (typical timeline; 34 applications currently pending)
Total Applications: 591 (career history, across all art units)
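The headline figures above can be reproduced from the raw counts shown on the page. The Tech Center average is not stated directly, so it is back-computed here from the displayed delta (a sketch using only the numbers on this page, not official USPTO data):

```python
# Reproducing the dashboard's headline figures from the counts shown above.
granted, resolved = 385, 557
career_allow_rate = 100 * granted / resolved           # 69.1%, displayed as 69%
stated_delta_vs_tc = 7.1                               # shown as "+7.1% vs TC avg"
implied_tc_average = career_allow_rate - stated_delta_vs_tc  # about 62%

print(round(career_allow_rate, 1), round(implied_tc_average, 1))
```

The "+28.6% interview lift" cannot be re-derived the same way, since the page does not show the without-interview allowance rate separately.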

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§103: 40.3% (+0.3% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 557 resolved cases
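Backing the Tech Center average out of each statute's rate and delta is instructive: every statute implies the same baseline, which suggests the "TC average estimate" is a single flat figure rather than a per-statute one (an inference from the displayed numbers only):

```python
# Implied Tech Center average per statute: displayed rate minus displayed delta.
rates  = {"101": 16.5, "103": 40.3, "102": 24.6, "112": 13.6}
deltas = {"101": -23.5, "103": 0.3, "102": -15.4, "112": -26.4}
implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
# every statute backs out to the same 40.0% baseline estimate
print(implied_tc_avg)
```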

Office Action

§101 §103
DETAILED ACTION

Response to Amendment: The preliminary amendment was received 02/28/2024; claims 1-13 are pending.

Claims 1-13 are objected to because of informalities.
Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claims 1-7, 9-12 and 13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 2, 3, 9, 10, 11, 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Huber, JR. et al. (US 2018/0107887 A1) in view of Wang (US 2020/0184200 A1) and Croxford (US 2020/0184319 A1).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Huber, Wang and Croxford as applied in claims 1, 2, 3, 9, 10, 11, 12 and 13, further in view of Velde et al. (Koen Van de Velde, Paul Suetens) (JP 4968981 B2) with SEARCH machine translation.
Claims 5, 6, 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Huber, Wang and Croxford as applied in claims 1, 2, 3, 9, 10, 11, 12 and 13, and Velde as applied in claim 4, further in view of MASSEN ROBERT (DE 40 35 368 A1) with SEARCH machine translation.

Claim Objections

Claims 1-13 are objected to because of the following informalities: Claim 1 has the symbols “(D1)” (at line 4), “(D2)” (at line 6) and “(D3)” (at line 8) that are not defined in the claim; thus claims 1-12 are objected to for depending on claim 1. Claim 6 has the symbols “(T1)” (at line 3) and “(T2)” (at line 14) that are not defined in the claim. Claim 13 is objected to similarly to claim 1.
Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because “The method” of claim 12 is equal to the claimed “computer program product” in the atmosphere, including non-statutory transitory carrier waves, to one of ordinary skill in the art of products of nature under the broadest reasonable interpretation of claim 12 in view of applicant’s disclosure (page 5, ll. 15-20):

12. The method of claim 1 is performed by a [[A]] computer program product directly loadable into the memory of at least one computer, comprising software code instructions for performing of the method, when said computer program product is run on the at least one computer.

12. (Suggested) The method of claim 1 is performed by a [[A]] non-transitory computer program product directly loadable into the memory of at least one computer, comprising software code instructions for performing of the method, when said computer program product is run on the at least one computer.

Claims 1-7, 9-12 and 13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 0: establish the broadest reasonable interpretation (in footnotes).
Step 1: claim 1 is a process; claim 13 is a machine.
Step 2A, prong 1: The claim(s) recite(s) an abstract idea and math: a) selecting (D1) regions of interest … b) creating (D2) a composite image … using a predetermined process … c) generating (D3) a filtered image … detecting (D4) a forgery … producing a set of parameters:

1.
(Currently Amended) A method for detecting a forgery of an identity document [[(100)]] including a visual security element [[(101)]], said method being performed by a processor [[(201)]] of a security device [[(200)]] and comprising: a) selecting (D1) regions of interest of a captured image of at least part of said identity document, said regions of interest comprising at least one edge of said security element, b) creating (D2) a composite image from said selected regions of interest using a predetermined process of image combination, c) generating (D3) a filtered image by applying to said composite image, a digital image filtering revealing geometrical objects created by a replacement or a displacement of said security element in said identity document, d) from said filtered image, using a dedicated neural network, detecting (D4) a forgery of the identity document based on the geometrical objects revealed by said digital image filtering, said dedicated neural network having been trained with a training data set such that an input to the dedicated neural network is a filtered image for a given identity document and an output of the dedicated neural network is an indicator of the forgery or not of said given identity document, wherein said output is based on geometrical objects in said input filtered image created by a replacement or a displacement of a security element in said given identity document and revealed by said digital image filtering, thereby producing a set of parameters for the dedicated neural network with which the processor of the security device has been programmed. 
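To make the claimed flow of steps a)-d) concrete, here is a minimal, hypothetical sketch. Every function name, box coordinate and threshold is invented for illustration; the ROI selection and composite creation are simplified stand-ins, and a thresholded Laplacian response stands in for the trained dedicated neural network of step d):

```python
import numpy as np

# Hypothetical sketch of claim 1's steps a)-d); all names/parameters invented.

def select_regions_of_interest(image, boxes):
    # a) crop regions that cover at least one edge of the security element
    return [image[y0:y1, x0:x1] for (y0, y1, x0, x1) in boxes]

def create_composite(rois):
    # b) combine the selected regions with a predetermined process
    #    (here: simple horizontal concatenation)
    height = min(r.shape[0] for r in rois)
    return np.concatenate([r[:height, :] for r in rois], axis=1)

def generate_filtered(composite):
    # c) 4-neighbour discrete Laplacian: reveals the straight edges left
    #    by a replaced or displaced element
    p = np.pad(composite.astype(float), 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] +
            p[1:-1, :-2] + p[1:-1, 2:] - 4 * p[1:-1, 1:-1])

def detect_forgery(filtered, threshold=10.0):
    # d) stand-in for the dedicated neural network: a strong average
    #    filter response is taken as the forgery indicator
    return float(np.abs(filtered).mean()) > threshold
```

A real system would replace `detect_forgery` with the trained network described in the claim; the point of the sketch is only the data flow ROI → composite → filtered image → indicator.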
Step 2A, prong 2: This judicial exception is not integrated into a practical application because the additional elements: of a captured image of at least part of said identity document, said regions of interest comprising at least one edge of said security element, b) … from said selected regions of interest … of image combination, c) … by applying to said composite image, a digital image filtering revealing geometrical objects created by a replacement or a displacement of said security element in said identity document, d) from said filtered image, using a dedicated neural network, … of the identity document based on the geometrical objects revealed by said digital image filtering, said dedicated neural network having been trained with a training data set such that an input to the dedicated neural network is a filtered image for a given identity document and an output of the dedicated neural network is an indicator of the forgery or not of said given identity document, wherein said output is based on geometrical objects in said input filtered image created by a replacement or a displacement of a security element in said given identity document and revealed by said digital image filtering, thereby … for the dedicated neural network with which the processor of the security device has been programmed, do not improve the function of a computer in view of applicant’s disclosure. [specification excerpt omitted] However, claim 8 reflects this improvement in pg. 5 and is not rejected under 35 USC 101.
Step 2B: The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because each additional element (in Step 2A, prong 2: “identity document” “security element” “filtering” “replacement” “neural network having been trained with a training data set” “processor”) considered individually or with the mental process & math adheres to conventional practices as indicated in applicant’s specification’s background and applicant’s specification’s page 13. [specification excerpts omitted]

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 2, 3, 9, 10, 11, 12 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huber, JR. et al. (US 2018/0107887 A1) in view of Wang (US 2020/0184200 A1) and Croxford (US 2020/0184319 A1): [claim chart omitted]

Re 1. (Currently Amended), Huber teaches A method for detecting a forgery of an identity document [[(100)]] including a visual security element (or “features” [0040] 2nd S) [[(101)]], said method being performed by a processor (fig.
8:808: “Processor”) [[(201)]] of a security device [[(200)]] and comprising: a) selecting (D1) regions of interest (via “one or more specific regions of interest-e.g., high value identification regions of the identification documents” [0004]: fig. 6A:602: high value photo region of interest) of a captured image of at least part of said identity document, said regions of interest comprising at least one (“jagged” [0057] 9th S) edge of said security element, b) creating (D2) a composite image (resulting in a “splice” “photo” “image” [0008] penult S: fig. 6B:602’: a new high value identification region) from said selected regions of interest using a predetermined process of image combination (resulting in a “splice” “photo” “image” [0008] penult S: fig. 6B:602’: a new high value identification region), c) generating (D3) a filtered image (comprising “filter” “extracted pixel features” [0038] last S: fig. 1:108: “Feature Extraction Engine”) by applying to said composite image, a digital image filtering (comprising by digital image “pixel…filters” [0038] last S, wherein “the digital image including an array of pixels” [0010]) revealing (via “boost dark areas and soften light ones” [0042] 2nd to last S: fig. 10B:1016: “Layer 7 Final Compressed Image”) geometrical objects (said geometric “areas” [0042] 2nd to last S: fig. 10B:1016: “Layer 7 Final Compressed Image”: fig. 4: “Augmentation Geometric”) created by a replacement (via “replacing…photos” [0007] last S) or a displacement of said security element in said identity document, d) from said filtered image (resulting in “filter”-“extracted pixel features” [0038] last S), using a dedicated (“image classifier” [0035] last S) neural network (fig. 1:106: “Image Classifier”), detecting (via a “tamper detector” [0033] 2nd S: fig. 8:814: “Tampering Detection System”) (D4) a forgery (via “the forger may utilize special software in an attempt to meticulously recreate backgrounds, security features, and the like. As yet another example, the forger may attempt to homogenize the modified portions of the image by taking a new live photo of a printout or screenshot of the splice or tamper.” [0008] penult S) of the identity document based on the geometrical objects (said geometric “areas” [0042] 2nd to last S: fig. 10B:1016: “Layer 7 Final Compressed Image”: fig. 4: “Augmentation Geometric”) revealed (via “boost dark areas and soften light ones” [0042] 2nd to last S: fig. 10B:1016: “Layer 7 Final Compressed Image”) by said digital image filtering (comprising by digital image “pixel…filters” [0038] last S, wherein “the digital image including an array of pixels” [0010]), said dedicated neural network (“image classifier” [0035] last S) having been trained (resulting in “a trained image classifier” [0034] 1st S) with a training data set (or “training dataset” [0054] last S) such that an input (or “each input” element, [0044] 3rd S) to the dedicated neural network (“image classifier” [0035] last S) is a filtered image (said geometric “areas” [0042] 2nd to last S: fig. 10B:1016: “Layer 7 Final Compressed Image”: fig. 4: “Augmentation Geometric”) for a given identity document and an output (fig. 9:906: “Provide the Tamper Indication Output”) of the dedicated neural network (“image classifier” [0035] last S) is an indicator of the forgery or not of said given identity document, wherein said output (fig. 9:906: “Provide the Tamper Indication Output”) is based on geometrical objects in said input filtered image created by a replacement (via “replacing…photos” [0007] last S) or a displacement of a security element in said given identity document and revealed by said digital image filtering, thereby producing a set of parameters for the dedicated neural network (“image classifier” [0035] last S) with which the processor (fig. 8:808: “Processor”) of the security device (or “security”-“tamper detector” [0033] penult S) has been programmed (“in accordance with one or more embodiments of this disclosure” [0026] penult S: fig. 2).

Huber does not teach the difference of claim 1 of: --d) … dedicated (neural network) … said dedicated (neural network) … the dedicated (neural network) … the dedicated (neural network) … producing a set of parameters for the dedicated (neural network)--.

Wang teaches the difference of claim 1 of: --d) … dedicated (neural network) (“computation units” [0038] 4th S: fig. 1:19) … said dedicated (neural network) (“computation units” [0038] 4th S: fig. 1:19) … the dedicated (neural network) (“computation units” [0038] 4th S: fig. 1:19) … the dedicated (neural network) (“computation units” [0038] 4th S: fig. 1:19) … producing a (“small” [0036] penult S) set of parameters for the dedicated (neural network) (“computation units” [0038] 4th S: fig. 1:19)--.

Since Huber teaches biometrics, one of skill in the art of biometrics can make Huber’s be as Wang’s, predictably recognizing in the change “a scalable parallel classification architecture can be efficiently implemented in hardware by replicating the same hardware block 19 that can be implement as an integrated circuit block with dedicated neural network computation units. In such design, a large computation resource that is typical required for a software implementation of a very deep neural network for a large group classification is not required since the shallower neural network used in the object discriminator design is easier to train to recognize one person.” Wang [0038] 4th S. [figure omitted]

The Huber of combination (illustrated above) of Huber, Wang does not teach the remaining difference of claim 1 of “producing”.
Croxford teaches the remaining difference of claim 1 of: producing (via “generating a set of output parameters indicative of an output feature map representative of the neural network” [0100] penult S). Since Huber of combination (illustrated above) of Huber, Wang teaches a parameter, one of skill in the art of parameters can make Huber’s of combination (illustrated above) of Huber, Wang be as Croxford’s, seeing the change “in a manner that may make more efficient use of a hardware component, such as NNA 200”, Croxford [0026] last S: Neural Network Accelerator: “dedicated neural network accelerator”, Croxford [0027] 6th S, thus providing an efficient use of the combination’s neural network (Discriminator). [figure omitted]

Re 2. (Original), Huber of the combination (illustrated above) of Huber, Wang, Croxford teaches The method of claim 1, wherein the security element is a biometric image (or a biometric ID photograph via “biometric” “identification data” wherein “The ‘identification data’ may include one or more of the following: an identification photograph” [0006] 6th & 7th Ss), which is unique to a holder of the identity document.

Re 3. (Currently Amended), Huber of the combination (illustrated above) of Huber, Wang, Croxford teaches The method of claim 1 [[or 2]], wherein the security element (or “features” [0040] 2nd S) is among a portrait (or “ID photo” [0006] penult S), a fingerprint (“fingerprints” [0006]), an Optically Variable Image Device (“ ‘OVD’ ” [0041] 2nd S) or a text element (or “text” “region” [0013] 1st S).

Re 9. (Currently Amended), Huber of the combination (illustrated above) of Huber, Wang, Croxford teaches The method of [[any one of]] claim[[s]] 1 [[to 8]], wherein programming the processor of the security device comprises storing the set of parameters in a memory (illustrated above: Croxford: fig. 10:1040: “Memory”: “Parameter Structures/Storage”) connected to the processor.

Re 10.
(Currently Amended), Huber of the combination (illustrated above) of Huber, Wang, Croxford teaches The method of [[any one of]] claim[[s]] 1 [[to 9]], wherein said identity document is an identity card, a (“national” [0027]) passport, a (“state” [0026]) driver license, a resident card, a healthcare insurance card or a bank card.

Re 11. (Currently Amended), Huber of the combination (illustrated above) of Huber, Wang, Croxford teaches The method of [[any one of]] claim[[s]] 1 [[to 10]], wherein said dedicated neural network is (illustrated above: “Discriminator”: detailed in Wang’s fig. 6:63: “conv1”) a convolutional neural network (via “the object discriminator 71 consists of … a first convolutional neutral network (CNN) conv1 63 to produce an output tensor 64.”, Wang [0037] 2nd & 3rd Ss).

Re 12. Huber of the combination (illustrated above) of Huber, Wang, Croxford teaches The method of claim 1 is performed by a [[A]] (“single”, Huber: [0078] last S) computer program product directly loadable into the memory of at least one computer, comprising software code instructions for performing of the method, when said computer program product is run on the at least one computer.

Claim 13 is rejected like claim 1:

Re 13. (Currently Amended), Huber of the combination (illustrated above) of Huber, Wang, Croxford teaches A[[n]] security device [[(200)]] comprising: [[-]] a network interface (comprised by “communication network” “components” [0074] penult S: “a front-end component, e.g., a client computer having a graphical user interface” [0074] 1st S) [[(206)]] configured to acquire captured images of at least part of said identity document to be checked and to provide the captured images to a processor [[(201),]]; [[-]] at least one memory (“random access memory (RAM)” [0059] last S) [[(203, 204, 205),]]; and [[-]] the processor [[(201)]] configured to execute below: a) selecting (D1) regions of interest of a captured image of at least part of said identity document, said regions of interest comprising at least one edge of said security element; b) creating (D2) a composite image from said selected regions of interest using a predetermined process of image combination; c) generating (D3) a filtered image by applying to said composite image, a digital image filtering revealing geometrical objects created by a replacement or a displacement of said security element in said identity document; and d) from said filtered image, using a dedicated neural network, detecting (D4) a forgery of the identity document based on the geometrical objects revealed by said digital image filtering: said dedicated neural network having been trained with a training data set such that an input to the dedicated neural network is a filtered image for a given identity document and an output of the dedicated neural network is an indicator of the forgery or not of said given identity document, wherein said output is based on geometrical objects in said input filtered image created by a replacement or a displacement of a security element in said given identity document and revealed by said digital image filtering, thereby producing a set of parameters for the dedicated neural network with which the processor of the
security device has been programmed.

Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huber, JR. et al. (US 2018/0107887 A1) in view of Wang (US 2020/0184200 A1) and Croxford (US 2020/0184319 A1) as applied in claims 1, 2, 3, 9, 10, 11, 12 and 13, further in view of Velde et al. (Koen Van de Velde, Paul Suetens) (JP 4968981 B2) with SEARCH machine translation: [claim chart omitted]

Re 4. (Currently Amended), Huber of the combination (illustrated above) of Huber, Wang, Croxford teaches The method of any one of claim[[s]] 1 [[to 3]], wherein said digital image filtering (comprising by digital image “pixel…filters” [0038] last S, wherein “the digital image including an array of pixels” [0010]) is (said geometric “areas” [0042] 2nd to last S: fig. 10B:1016: “Layer 7 Final Compressed Image”: fig. 4: “Augmentation Geometric”) based on (A) color gradient and (B) [[I]]Laplacian measures computation (via “one or more computers” [0046]: fig. 8:808: “Processor”).

Huber of the combination (illustrated above) of Huber, Wang, Croxford does not teach the difference of claim 4 of: --on color gradient and [[l]]Laplacian--.

Velde teaches the Markush element [(A) and (B)] in the difference of claim 4 of: --on (A) color gradient (“at a particular pixel and a particular resolution level”, pg. 3 [0019]) and (B) [[l]]Laplacian (“(pyramid)”, pg. 5 [0042])--.

Since Huber of the combination (illustrated above) of Huber, Wang, Croxford teaches color enhancement, one of skill in color enhancements can make Huber’s enhancement of the combination (illustrated above) of Huber, Wang, Croxford be as Velde’s, predictably recognizing the change “to enhance to a greater extent than the other parts”, Velde [0143].

Claim(s) 5, 6, 7 and 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huber, JR. et al.
(US 2018/0107887 A1) in view of Wang (US 2020/0184200 A1) and Croxford (US 2020/0184319 A1) as applied in claims 1, 2, 3, 9, 10, 11, 12 and 13, and Velde et al. (Koen Van de Velde, Paul Suetens) (JP 4968981 B2) with SEARCH machine translation as applied in claim 4, further in view of MASSEN ROBERT (DE 40 35 368 A1) with SEARCH machine translation: [claim chart omitted]

Re 5. (Currently Amended), Huber of the combination (illustrated above) of Huber, Wang, Croxford, Velde teaches The method of [[any one of]] claim[[s 1 to]] 4, wherein said digital image filtering (comprising by digital image “pixel…filters” [0038] last S, wherein “the digital image including an array of pixels” [0010]) is based on (A) Mexican hat filtering, (B) Canny edge detection, (C) blurriness enhancement or reduction, (D) convolution matrix filtering, (E) Histogram of oriented gradients or (F) Hough line transform.

Huber of the combination (illustrated above) of Huber, Wang, Croxford, Velde does not teach the Markush element. ROBERT teaches Markush alternative (A): (A) Mexican hat filtering (comprised by “ ‘mexican hat’ ” “filter”, pg. 3, 7th txt blk). Since Huber of the combination (illustrated above) of Huber, Wang, Croxford, Velde teaches a filter, one of skill in filters can make Huber’s of the combination (illustrated above) of Huber, Wang, Croxford, Velde be as ROBERT’s, seeing the change as “A better resolution”, ROBERT, pg. 3, 7th txt blk.

Re 6. (Currently Amended), Huber of the combination (illustrated above) of Huber, Wang, Croxford, Velde, ROBERT teaches The method of [[any one of]] claim[[s 1 to]] 5, comprising previously: • obtaining (T1) from a plurality (via fig.
3:302: “Obtain Untampered Digital Images”) of training identity documents (resulting in “generating a collection of training images that can be used to train an image classifier (e.g., step 202 of process 200)”, Huber: [0048]), forged or not, a plurality of filtered (or “augmented” [0053] 1st S via “filtering effects” [0052] last S: fig. 3) identity document images (fig. 3:312: “Generate Training Digital Images Based on the Augmented Digital Images”), comprising, for each training identity document of said plurality of training identity documents (via fig. 3:302: “Obtain Untampered Digital Images”): - selecting regions of interest (via “identifying a region of interest”, [0018] 1st S, as in the role, function or status of “a subset of the pixel array”, corresponding to “identify…with respect to certain high value regions”, Huber [0048], last S) of a captured image of at least part of said training identity document, said regions of interest comprising at least one edge (“angle” Huber: [0015] last S: fig. 6B:600b: crooked picture) of said security (or “features” [0040] 2nd S) element, - creating a composite (via “ ‘photo splicing’ ” [0050] 3rd S: fig. 3:304: “Generate First Tampered Digital Images Based on the Untampered Digital Images”) identity document image from said selected regions of interest (such that “The identified region may then be selectively tampered”, Huber [0057] 3rd S, (i.e., spliced)) using said predetermined process of images combination (resulting in a “splice” “photo” “image” [0008] penult S: fig. 6B:602’: a new high value identification region), - generating (“of the untampered/tampered digital images” Huber: [0052] 1st S, i.e., an identified/selected-region splice image) a filtered image (via “augment digital images via…filtering effects”, Huber: [0052] last two Ss) by applying, to said composite identity document image (via “ ‘photo splicing’ ” [0050] 3rd S: fig. 3:304: “Generate First Tampered Digital Images Based on the Untampered Digital Images”), a digital image filtering (via “augment digital images via…filtering effects”, Huber: [0052] last two Ss) revealing geometrical objects (via “Enhancements may also include… other filters and effects to boost dark areas and soften lighter ones.” [0042] 7th S: fig. 10B:1016: “Layer 7 Final Compressed Image”) created by a replacement (via “the first set of tampered digital images may be generated by: (i) replacing an original photo in a digital image of a physical credential with a different photo (“photo splicing”)” [0050] 2nd S: fig. 10B:1018: “Layer 8 Tampered Image”) or a displacement of a security element (or “features” [0040] 2nd S) in said training identity document, • creating (T2) said training data set (“that can be used to train an image classifier (e.g., step 202 of process 200).”, Huber: [0048] 1st S: fig. 5: “Training Set”) by recording (via data records) in the training data set (via “a training dataset” [0014] 1st S) each obtained filtered image and for each filtered image (via “augment digital images via…filtering effects”, Huber: [0052] last two Ss) whether the identity document from which it has been obtained is forged or not (“that are either tampered or untampered, and labeled as such” [0034] 9th S); and wherein the step of training the dedicated neural network (“image classifier”, Huber: [0035] last S, via said make Huber’s be as Wang’s predictably recognizing in the change “a scalable parallel classification architecture can be efficiently implemented in hardware by replicating the same hardware block 19 that can be implement as an integrated circuit block with dedicated neural network computation units.
In such design, a large computation resource that is typical required for a software implementation of a very deep neural network for a large group classification is not required since the shallower neural network used in the object discriminator design is easier to train to recognize one person.” Wang [0038] 4th S) with said training data set comprises, for each filtered image of the training data set: - collecting (fig. 4: “Augmented Training Images”) the geometrical objects (said geometric “areas” [0042] 2nd to last S: fig. 10B:1016: “Layer 7 Final Compressed Image”: fig. 4: “Augmentation Geometric”) in the filtered image (via “augment digital images via…filtering effects”, Huber: [0052] last two Ss), - classifying (via an image-classifier-detector classifying the greatest dimensional geometric area pixel-features via “[0043] The seven layers described above represent exemplary intrinsic pixel features that may be perceived by image classifiers (as well as other types of tamper detectors) and identified as associated with natural or untampered imaged identification documents.”) said geometrical objects (said geometric “areas” [0042] 2nd to last S: fig. 10B:1016: “Layer 7 Final Compressed Image”: fig. 4: “Augmentation Geometric”) collected in the filtered image as indicative or not of the forgery of the identity document from which the filtered image has been obtained.

Re 7. (Currently Amended), Huber of the combination (illustrated above) of Huber, Wang, Croxford, Velde, ROBERT teaches The method of [[any one of]] claim[[s 1 to]] 6, wherein said geometrical objects (said greatest-dimension geometric “areas” [0042] 2nd to last S: fig. 10B:1016: “Layer 7 Final Compressed Image”: fig. 4: “Augmentation Geometric”: classified via said image-classifier-detector: fig. 4: “Convolutional Neural Network”: “Trained Model”) comprise vertical and horizontal (as shown/understood in figures 6A, 6B, 6C) lines (or “photo”-“edges” [0037] 5th S) created by a replacement (via “the first set of tampered digital images may be generated by: (i) replacing an original photo in a digital image of a physical credential with a different photo (“photo splicing”)” [0050] 2nd S: fig. 10B:1018: “Layer 8 Tampered Image”) or a displacement of a security (or “features” [0040] 2nd S) element.

Re 8. (Currently Amended), Huber of the combination (illustrated above) of Huber, Wang, Croxford, Velde, ROBERT teaches The method of claim 6 [[or 7]], wherein each identity document (or “identification documents” [0034] 9th S) belongs to a document class (or “document” “label” [0034] 8th S via fig. 8:828: “Classification Result”) among a plurality of document (“untampered”/“tampered”) classes (via “the image classifier may label the image of the identification documented as “tampered” or “untampered” based on a number of intrinsic features obtained from a pixel-level analysis of the digital imaged identification document.” [0034] 8th S) each characterized (via pixel-area-features) by at least one of a type of (A) identity document (fig. 6A: “AUTO DRIVER’S LICENSE”), (B) an originating country of identity document, (C) a version number of identity document, comprising a determination (“untampered”/“tampered”) by said dedicated neural network (“image classifier”, Huber: [0035] last S, via said make Huber’s be as Wang’s predictably recognizing in the change “a scalable parallel classification architecture can be efficiently implemented in hardware by replicating the same hardware block 19 that can be implement as an integrated circuit block with dedicated neural network computation units. In such design, a large computation resource that is typical required for a software implementation of a very deep neural network for a large group classification is not required since the shallower neural network used in the object discriminator design is easier to train to recognize one person.” Wang [0038] 4th S) of the document class (or “document” “label” [0034] 8th S via fig. 8:828: “Classification Result”) of said identity document (or “identification documents” [0034] 9th S) and wherein the step d) of detecting (via a “tamper detector” [0033] 2nd S: fig. 8:814: “Tampering Detection System”) a forgery (via “the forger may utilize special software in an attempt to meticulously recreate backgrounds, security features, and the like. As yet another example, the forger may attempt to homogenize the modified portions of the image by taking a new live photo of a printout or screenshot of the splice or tamper.” [0008] penult S) of the identity document by said dedicated neural network (“image classifier”, Huber: [0035] last S, via said make Huber’s be as Wang’s predictably recognizing in the change “a scalable parallel classification architecture can be efficiently implemented in hardware by replicating the same hardware block 19 that can be implement as an integrated circuit block with dedicated neural network computation units. In such design, a large computation resource that is typical required for a software implementation of a very deep neural network for a large group classification is not required since the shallower neural network used in the object discriminator design is easier to train to recognize one person.” Wang [0038] 4th S) depends on the determined (“untampered”/“tampered”) document class (or “document” “label” [0034] 8th S via fig. 8:828: “Classification Result”) for said identity document (or “identification documents” [0034] 9th S).
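The filterings at issue in claims 4 and 5 (color gradient, Laplacian, Mexican hat) are standard image-processing measures. Here is a minimal, hypothetical sketch of two of them; the function names, kernel size and sigma are invented for illustration, not taken from any cited reference:

```python
import numpy as np

# Hypothetical sketches of two filterings named in claims 4-5.

def color_gradient_magnitude(rgb):
    # per-channel finite-difference gradients, combined across channels;
    # high values mark abrupt color transitions such as a pasted photo edge
    gy, gx = np.gradient(rgb.astype(float), axis=(0, 1))
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=2))

def mexican_hat_kernel(size=9, sigma=1.5):
    # Mexican hat (Laplacian-of-Gaussian) profile: positive center,
    # negative ring, normalized to zero mean so flat regions respond with 0
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (1 - r2 / (2 * sigma ** 2)) * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()
```

Convolving an image with `mexican_hat_kernel()` responds strongly to blob- and line-like structures at the kernel's scale, which is why it appears in the claim 5 Markush group alongside Canny edge detection and the Hough line transform.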
Conclusion

The prior art “nearest to the subject matter defined in the claims” (MPEP 707.05) made of record and not relied upon is considered pertinent to applicant's disclosure. The following references are relevant to the subject matter claimed and disclosed in this Application; they are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.

Lohweg et al. (US 2010/0195894 A1): Lohweg teaches a neural net, selecting a region of interest, “visual”-“security features”, and digital filtering via:

[0018]: “A neural network is exploited to output a discrimination result as to whether or not the attribute of the image in the block area is a halftone dot image based on the spatial frequency spectrum outputted from the Fourier transformation.”

[0038]: “Maximization of the authentication rating is achieved by ensuring that the selected region of interest includes a high density (high spatial frequency) of patterns (preferably linear or curvilinear intaglio-printed patterns). The patterns can in particular be patterns of a pictorial representation, such as a portrait, provided on the candidate document.”

[0073]: “The present invention stems from the observation that security features58 printed, applied or otherwise provided on security documents using the specific production processes that are only available to the security printer, in particular intaglio-printed features, exhibit highly characteristic visual features (hereinafter referred to as "intrinsic features") that are recognizable by a qualified person having knowledge about the specific production processes involved.”

[0090]: “a one-level digital filter bank comprising a low-pass filter with function h(n) and a high-pass filter with function g(n) which split the signal scale/spectrum in two parts of equal spectral range”

Lohweg is the closest to the claimed “dedicated neural network”, “selecting (D1) regions of interest”, “visual security element”, and “digital image filtering” of claim 1.

KOMAROV et al. (CN 103814400 A) with SEARCH machine translation: corresponds to Komarov et al. (US 9,715,635 B2), which did not appear as a search result via SEARCH (at L17); there may be a possible translation difference (synonyms): “optically capturable document features” (in US 9,715,635: [0002]) vs. “visual file characteristic” (in CN 103814400 A: [0002]). KOMAROV (CN 103814400 A) teaches “optical”-“security feature”, “neural network”, “region is selected”, and “digital format”-“filtering”:

“For example, a file characteristic may be a security feature file, a security feature59 on the document is optically collected.”, pg. 2: [0008], last S;

“In order to identify the mode, the server can be used for executing an algorithm, such as a neural network (NN), or the algorithm-based method, the server can use one or more example classification and support vector machine (SVM), through training to improve the capability of classifying the data.”, pg. 3: [0016], last S;

“If the frequency is present, correlation of the characteristic region is selected in step 615. otherwise, the method further performs the step 603.”, pg. 9: [0063], last S;

“[0066] after step 701, in step 705 detects the availability of resources. if there is enough available resource for local file identification, in step 707, performing data60 filtering conversion. otherwise, the method continues to step 703.”, pg. 9.

KOMAROV is the closest to the claimed “visual security element”, “dedicated neural network”, “selecting (D1) regions of interest”, and “digital image filtering” of claim 1.

Komarov et al. (US 9,715,635 B2): mentioned above. Komarov teaches “the associated characteristic regions are selected in step 615.”, c. 10, ll. 5-10, as the closest to the claimed “selecting (D1) regions of interest” of claim 1.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS ROSARIO, whose telephone number is (571) 272-7397. The examiner can normally be reached Monday-Friday, 9 AM-5 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DENNIS ROSARIO/
Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676

1 atmosphere: the gaseous envelope surrounding the earth; the air. (Dictionary.com)
2 by: accompanied with or in the atmosphere of, wherein accompanied is defined: to go along or in company with; join in action, wherein join is defined: to bring together in a particular relation or for a specific purpose, action, etc.; unite, wherein unite is defined: to join, combine, or incorporate so as to form a single whole or unit, wherein incorporate is defined: to embody; exemplify, wherein embody is defined: to embrace or comprise, wherein comprise is defined: to include or contain, wherein contain is defined: to be equal to (Dictionary.com)
3 product: a person or thing produced by or resulting from a process, as a natural, social, or historical one; result. (Dictionary.com)
4 “directly” is an adverb: Grammar. any member of a class of words that function as modifiers of verbs or clauses, and in some languages, as Latin and English, as modifiers of adjectives (“computer program” of “computer program product”), other adverbs, or adverbial phrases, as very in very nice, much in much more impressive, and tomorrow in She'll write to you tomorrow. They relate to what they modify by indicating place (I promise to be there), time (Do your homework now!), manner (She sings beautifully), circumstance (He accidentally dropped the glass when the bell rang), degree (I'm very happy to see you), or cause (I draw, although badly). (Dictionary.com)
5 by: accompanied with or in the atmosphere of, wherein accompanied is defined: to go along or in company with; join in action, wherein join is defined: to bring together in a particular relation or for a specific purpose, action, etc.; unite, wherein unite is defined: to join, combine, or incorporate so as to form a single whole or unit, wherein incorporate is defined: to embody; exemplify, wherein embody is defined: to embrace or comprise, wherein comprise is defined: to include or contain, wherein contain is defined: to be equal to (Dictionary.com)
6 product: a person or thing produced by or resulting from a process, as a natural, social, or historical one; result. (Dictionary.com)
7 “directly” is an adverb: Grammar. any member of a class of words that function as modifiers of verbs or clauses, and in some languages, as Latin and English, as modifiers of adjectives (“computer program” of “computer program product”), other adverbs, or adverbial phrases, as very in very nice, much in much more impressive, and tomorrow in She'll write to you tomorrow. They relate to what they modify by indicating place (I promise to be there), time (Do your homework now!), manner (She sings beautifully), circumstance (He accidentally dropped the glass when the bell rang), degree (I'm very happy to see you), or cause (I draw, although badly). (Dictionary.com)
8 background: one's origin, education, experience, etc., in relation to one's present character, status, etc., wherein experience is defined: knowledge or practical wisdom gained from what one has observed, encountered, or undergone, wherein practical is defined: of or relating to practice or action, wherein practice is defined: custom, wherein custom is defined: convention, wherein convention is defined: conventionalism, wherein conventionalism is defined: adherence to or advocacy of conventional attitudes or practices (Dictionary.com)
9 feature: a prominent or conspicuous part or characteristic, wherein part is defined: a portion or division of a whole that is separate or distinct; piece, fragment, fraction, or section; constituent, wherein constituent is defined: an element, material, etc. that is part of something else; component. (Dictionary.com)
10 or: (used to connect words, phrases, or clauses representing alternatives), wherein alternative is defined: one of the things, propositions, or courses of action that can be chosen. (Dictionary.com)
11 area: the quantitative measure of a plane or curved surface; two-dimensional extent, wherein two-dimensional is defined: having the dimensions of height and width only, wherein dimensions is defined: Usually dimensions. measurement in length, width, and thickness, wherein length is defined: the measure of the greatest dimension of a plane or solid figure, wherein figure is defined: Geometry. a combination of geometric elements disposed in a particular form or shape, wherein extent is defined: something extended, as a space; a particular length, area, or volume; something having extension, wherein thing is defined: a material object without life or consciousness; an inanimate object. (Dictionary.com)
12 area: the quantitative measure of a plane or curved surface; two-dimensional extent, wherein two-dimensional is defined: having the dimensions of height and width only, wherein dimensions is defined: Usually dimensions. measurement in length, width, and thickness, wherein length is defined: the measure of the greatest dimension of a plane or solid figure, wherein figure is defined: Geometry. a combination of geometric elements disposed in a particular form or shape. (Dictionary.com)
13 “is” essentially means look at a figure (Dictionary.com)
14 area: the quantitative measure of a plane or curved surface; two-dimensional extent, wherein two-dimensional is defined: having the dimensions of height and width only, wherein dimensions is defined: Usually dimensions. measurement in length, width, and thickness, wherein length is defined: the measure of the greatest dimension of a plane or solid figure, wherein figure is defined: Geometry. a combination of geometric elements disposed in a particular form or shape. (Dictionary.com)
15 (italics) represent claim limitations already taught
16 (italics) represent claim limitations already taught
17 “producing” is directed to the claimed “set”
18 “producing” is directed to the claimed “set”
19 “producing” is directed to the claimed “set”
20 generate: to bring into existence; cause to be; produce. (Dictionary.com)
21 feature: a prominent or conspicuous part or characteristic, wherein part is defined: a portion or division of a whole that is separate or distinct; piece, fragment, fraction, or section; constituent, wherein constituent is defined: an element, material, etc. that is part of something else; component.
22 region: an extensive, continuous part of a surface, space, or body, wherein part is defined: a portion or division of a whole that is separate or distinct; piece, fragment, fraction, or section; constituent, wherein constituent is defined: an element, material, etc. that is part of something else; component. (Dictionary.com)
23 “is” essentially means look at a figure (Dictionary.com)
24 area: the quantitative measure of a plane or curved surface; two-dimensional extent, wherein two-dimensional is defined: having the dimensions of height and width only, wherein dimensions is defined: Usually dimensions. measurement in length, width, and thickness, wherein length is defined: the measure of the greatest dimension of a plane or solid figure, wherein figure is defined: Geometry. a combination of geometric elements disposed in a particular form or shape. (Dictionary.com)
25 based (VERB (USED WITH OBJECT): “said digital image filtering”): to place or establish on a base or basis; ground; found (usually followed by on or upon). (Dictionary.com)
26 on: in connection, association, or cooperation with; as a part or element of. (Dictionary.com)
27 Markush element follows: [(A) and (B)]
28 on: in connection, association, or cooperation with; as a part or element of. (Dictionary.com)
29 CLAIM SCOPE: and: (used to connect grammatically coordinate words, phrases, or clauses) along or together with; as well as; in addition to; besides; also; moreover. (Dictionary.com)
30 CLAIM SCOPE: BROADEST REASONABLE INTERPRETATION in view of applicant's disclosure: and (used to connect alternatives). (Dictionary.com)
31 computer: a programmable electronic device designed to accept data, perform prescribed mathematical and logical operations at high speed, and display the results of these operations, wherein operations is defined: Mathematics. a mathematical process, as addition, multiplication, or differentiation, wherein addition is defined: the process of uniting two or more numbers into one sum, represented by the symbol +, wherein sum is defined: the aggregate of two or more numbers, magnitudes, quantities, or particulars as determined by or as if by the mathematical process of addition, wherein determined is defined: to conclude or ascertain, as after reasoning, observation, etc., wherein ascertain is defined: to find out definitely; learn with certainty or assurance; determine, wherein find is defined: to ascertain by study or calculation, wherein calculation is defined: the act or process of calculating; computation, wherein at is defined: (used to indicate amount, degree, or rate), wherein amount is defined: quantity; measure. (Dictionary.com): for each computer
32 on: in connection, association, or cooperation with; as a part or element of. (Dictionary.com)
33 Markush element follows: [(A) and (B)]
34 CLAIM SCOPE: and: (used to connect grammatically coordinate words, phrases, or clauses) along or together with; as well as; in addition to; besides; also; moreover. (Dictionary.com): this definition-sense of “and” is not the broadest reasonable interpretation of claim 4.
35 CLAIM SCOPE: BROADEST REASONABLE INTERPRETATION in view of applicant's disclosure: and (used to connect alternatives). (Dictionary.com)
36 on: in connection, association, or cooperation with; as a part or element of. (Dictionary.com)
37 Markush element follows: [(A) and (B)]
38 Since Velde teaches Markush alternative (A), the Markush element [(A) and (B)] is taught under the broadest reasonable interpretation of claim 4.
39 CLAIM SCOPE: and: (used to connect grammatically coordinate words, phrases, or clauses) along or together with; as well as; in addition to; besides; also; moreover. (Dictionary.com): this definition-sense of “and” is not the broadest reasonable interpretation of claim 4.
40 CLAIM SCOPE: BROADEST REASONABLE INTERPRETATION in view of applicant's disclosure: and (used to connect alternatives). (Dictionary.com)
41 Since Velde teaches Markush alternative (B), the Markush element [(A) and (B)] is taught under the broadest reasonable interpretation of claim 4.
42 Markush element follows: [(A) (B) (C) (D) (E) (F)]
43 identify: to make, represent to be, or regard or treat as the same or identical, wherein make is defined: to appoint or name, wherein appoint is defined: to name or assign to a position, an office, or the like; designate, wherein designate is defined: to nominate or select for a duty, office, purpose, etc. (“as a subset of the pixel array”); appoint; assign. (Dictionary.com)
44 as: in the role, function, or status of. (Dictionary.com)
45 identify: to make, represent to be, or regard or treat as the same or identical, wherein make is defined: to appoint or name, wherein appoint is defined: to determine by authority or agreement; fix; set, wherein set is defined: to put in the proper or desired order or condition for use, wherein desired is defined: deemed correct or proper; selected; required. (Dictionary.com)
46 feature: a prominent or conspicuous part or characteristic, wherein part is defined: a portion or division of a whole that is separate or distinct; piece, fragment, fraction, or section; constituent, wherein constituent is defined: an element, material, etc. that is part of something else; component. (Dictionary.com)
47 feature: a prominent or conspicuous part or characteristic, wherein part is defined: a portion or division of a whole that is separate or distinct; piece, fragment, fraction, or section; constituent, wherein constituent is defined: an element, material, etc. that is part of something else; component. (Dictionary.com)
48 dataset: Computers. a collection of data records for computer processing.
49 area: the quantitative measure of a plane or curved surface; two-dimensional extent, wherein two-dimensional is defined: having the dimensions of height and width only, wherein dimensions is defined: Usually dimensions. measurement in length, width, and thickness, wherein length is defined: the measure of the greatest dimension of a plane or solid figure, wherein (greatest) dimension is defined: an aspect, feature, or angle, wherein figure is defined: Geometry. a combination of geometric elements disposed in a particular form or shape, wherein extent is defined: something extended, as a space; a particular length, area, or volume; something having extension, wherein thing is defined: a material object without life or consciousness; an inanimate object. (Dictionary.com)
50 feature: a prominent or conspicuous part or characteristic, wherein part is defined: a portion or division of a whole that is separate or distinct; piece, fragment, fraction, or section; constituent, wherein section is defined: shape, wherein shape is defined: the quality of a distinct object or body in having an external surface or outline of specific form or figure, wherein form is defined: external appearance of a clearly defined area, as distinguished from color or material; configuration. (Dictionary.com)
51 identified: to recognize or establish as being a particular person or thing; verify the identity of, wherein particular is defined: of or relating to a single or specific person, thing, group, class, occasion, etc., rather than to others or all; special rather than general. (Dictionary.com)
52 area: the quantitative measure of a plane or curved surface; two-dimensional extent, wherein two-dimensional is defined: having the dimensions of height and width only, wherein dimensions is defined: Usually dimensions. measurement in length, width, and thickness, wherein length is defined: the measure of the greatest dimension of a plane or solid figure, wherein (greatest) dimension is defined: an aspect, feature, or angle, wherein figure is defined: Geometry. a combination of geometric elements disposed in a particular form or shape, wherein extent is defined: something extended, as a space; a particular length, area, or volume; something having extension, wherein thing is defined: a material object without life or consciousness; an inanimate object. (Dictionary.com)
53 area: the quantitative measure of a plane or curved surface; two-dimensional extent, wherein two-dimensional is defined: having the dimensions of height and width only, wherein dimensions is defined: Usually dimensions. measurement in length, width, and thickness, wherein length is defined: the measure of the greatest dimension of a plane or solid figure, wherein (greatest) dimension is defined: an aspect, feature, or angle, wherein figure is defined: Geometry. a combination of geometric elements disposed in a particular form or shape, wherein extent is defined: something extended, as a space; a particular length, area, or volume; something having extension, wherein thing is defined: a material object without life or consciousness; an inanimate object. (Dictionary.com)
54 splice: to join or unite, wherein unite is defined: to join, combine, or incorporate so as to form a single whole or unit, wherein form is defined: to construct or frame, wherein construct is defined: to build or form by putting together parts; frame; devise, wherein build is defined: to mold, form, or create. (Dictionary.com)
55 feature: a prominent or conspicuous part or characteristic, wherein part is defined: a portion or division of a whole that is separate or distinct; piece, fragment, fraction, or section; constituent, wherein constituent is defined: an element, material, etc. that is part of something else; component. (Dictionary.com)
56 Markush element follows: [(A) (B) (C)]
57 Since Markush alternative (A) is taught, the Markush element [(A)(B)(C)] is also taught under the broadest reasonable interpretation of claim 8.
58 feature: a prominent or conspicuous part or characteristic, wherein part is defined: a portion or division of a whole that is separate or distinct; piece, fragment, fraction, or section; constituent, wherein constituent is defined: an element, material, etc. that is part of something else; component. (Dictionary.com)
59 feature: a prominent or conspicuous part or characteristic, wherein part is defined: a portion or division of a whole that is separate or distinct; piece, fragment, fraction, or section; constituent, wherein constituent is defined: an element, material, etc. that is part of something else; component. (Dictionary.com)
60 data: (usually used with a singular verb) information in digital format, as encoded text or numbers, or multimedia images, audio, or video. (Dictionary.com)
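For orientation on the Lohweg reference cited in the Conclusion: a “one-level digital filter bank comprising a low-pass filter with function h(n) and a high-pass filter with function g(n) which split the signal scale/spectrum in two parts of equal spectral range” is, in its simplest form, a single wavelet analysis step. The sketch below assumes Haar filters purely for illustration; Lohweg's [0090] does not fix the filter coefficients:

```python
def one_level_filter_bank(signal):
    """Split a 1-D signal into a low-pass (approximation) band and a
    high-pass (detail) band, each half the length of the input.
    Haar filters h = (1/2, 1/2), g = (1/2, -1/2) assumed for illustration."""
    assert len(signal) % 2 == 0, "even-length input expected"
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high


low, high = one_level_filter_bank([4, 6, 10, 12])
print(low, high)  # [5.0, 11.0] [-1.0, -1.0]
```

The high-pass (detail) band is what carries the fine, high-spatial-frequency structure that Lohweg's region-of-interest selection targets; a flat signal yields an all-zero detail band.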

Prosecution Timeline

Feb 28, 2024
Application Filed
Jan 05, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586184
METHODS AND APPARATUS FOR ANALYZING PATHOLOGY PATTERNS OF WHOLE-SLIDE IMAGES BASED ON GRAPH DEEP LEARNING
2y 5m to grant Granted Mar 24, 2026
Patent 12585733
SYSTEMS AND METHODS OF SENSOR DATA FUSION
2y 5m to grant Granted Mar 24, 2026
Patent 12536786
IMAGE LOCALIZATION USING A DIGITAL TWIN REPRESENTATION OF AN ENVIRONMENT
2y 5m to grant Granted Jan 27, 2026
Patent 12518519
PREDICTOR CREATION DEVICE AND PREDICTOR CREATION METHOD
2y 5m to grant Granted Jan 06, 2026
Patent 12518404
SYSTEMS AND METHODS FOR MACHINE LEARNING BASED PHYSIOLOGICAL MOTION MEASUREMENT
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
69%
Grant Probability
98%
With Interview (+28.6%)
3y 8m
Median Time to Grant
Low
PTA Risk
Based on 557 resolved cases by this examiner. Grant probability derived from career allow rate.
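The headline figures can be reproduced from the counts shown on this page; note that treating the interview lift as a simple additive bump is an assumption about how the tool combines the numbers:

```python
granted, resolved = 385, 557            # examiner's career counts (from this page)
career_allow_rate = granted / resolved  # basis for the grant probability
interview_lift = 0.286                  # the +28.6% lift shown above

print(f"{career_allow_rate:.1%}")                   # 69.1%, shown rounded as 69%
print(f"{career_allow_rate + interview_lift:.0%}")  # 98%, assuming additive lift
```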
