Prosecution Insights
Last updated: April 19, 2026
Application No. 18/536,895

MACHINE LEARNING BASED SEAL DETECTION AND AUTHENTICATION

Non-Final OA: §103, §112
Filed: Dec 12, 2023
Examiner: SOFRONIOU, MICHAEL MARIO
Art Unit: 2661
Tech Center: 2600 (Communications)
Assignee: SAP SE
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Estimated OA Rounds: 1-2
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution (typical timeline): 2y 9m
Career History: 11 total applications (11 currently pending, across all art units)

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 37.8% (-2.2% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 35.1% (-4.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 0 resolved cases

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Typographic Conventions

Throughout this office action, shorthand notation for referencing locations of elements in documents is utilized. The following is a brief summary of the shorthand used: Sec. – denotes an associated section with a header in non-patent literature; ¶ – denotes the number and location of a paragraph; col. – denotes a column number; ln. – denotes a line; if a line number is not demarcated in a document, the line number will be assumed to start at 1 for each paragraph.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: process 400 (Fig. 4). Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claims 6, 7, 14 & 15 are objected to because of the following informalities described below. Appropriate correction is required.
Regarding claims 6 & 14, the applicant recites the phrase “wherein the background noise is using a second filtering algorithm if the extracted seal is in gray scale”. The examiner believes the word “removed” between “is” and “using” was accidentally omitted. Further clarification is required.

Regarding claims 7 & 15, the applicant recites the phrase “in response transforming, registering, using the one or more scale invariant feature transform (SIFT) features, the transformed, extracted seal and the transformed model seal to form the residual image.” It is unclear if the applicant intended to say “in response to transforming, registering…” or “in response to the transforming, registering…” to clarify that the aforementioned limitation is in response to the implementation of the prior limitation described in claim 7. Further clarification is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 16 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 16 recites the limitation "residual image" in ln. 2. There is insufficient antecedent basis for this limitation in the claim. Claim 16 is indicated to have a dependency on claim 9, which has no explicit recitation of “the residual image”.
It appears as though claim 16 may have been intended to depend from claim 15, which first recites the limitation of “the residual image”.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 9-11 & 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Rasheed et al (US 2024/0161463 A1) in view of Tan et al (“Seal Imprint Verification Using SVM Classifier and Unmatched Key Point Features”, 2021, IEEE).

Regarding claim 1, Rasheed et al disclose a system and method for detecting and validating the similarity between two stamps or seals. More specifically, Rasheed et al teach A computer-implemented method, comprising: receiving, by a machine learning model, one or more training documents (routine 100 outlines a method for detecting a seal or stamp, where in step 102 a document is received via a neural network [¶0101-104; Fig. 1], with routine 200 outlining the generation of a training dataset by intaking a collection of digital stamp patterns at steps 202 & 204 [¶0105-0109; Fig. 2]) and one or more labels indicating whether the one or more training documents include a seal (pair labels showing whether documents contain the same stamp may be labeled one way (e.g., with a “1”) and pairs showing different or no stamps may be labeled another (e.g., with a “0”) [¶0072]); training, using the one or more training documents and the one or more labels, the machine learning model to perform a task of detecting seals in one or more documents (routine 900 outlines a method of configuring a neural network for detecting a stamp, where images are labeled as positive or negative image pairs in accordance with their stamp labels [¶0138-0140; Fig. 9]); receiving, by the trained machine learning model, a document to be authenticated (routine 100 outlines a method for detecting a seal or stamp, where in step 102 a document is received via a neural network [¶0101-104; Fig. 1]); detecting, by the trained machine learning model, whether the document contains a seal (in step 104 the input page is configured to recognize a copy-guard digital stamp pattern [¶0102; Fig. 1]); in response to detecting the seal, providing the seal extracted by the trained machine learning model for authentication (routine 3300 outlines the network training and validation processes for comparing pattern matches [¶0253-263; Fig. 33A]); and providing the similarity score as an indication of whether the extracted seal is authentic (digital image comparator 2700 generates a similarity score 2748 from local feature maps of two stamps [¶0226; Fig. 27]). Rasheed et al teach authenticating the extracted stamp or seal and determining a similarity score, but do not teach extracting the seal in the scale invariant domain to do so.
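The cited pair-labeling scheme (same-stamp document pairs labeled “1”, different- or no-stamp pairs labeled “0”) can be sketched in a few lines of Python. This is a hypothetical illustration of the general idea only; the function name `build_training_pairs`, the tuple format, and the toy data are assumptions, not code from the cited reference.

```python
# Hypothetical sketch of the pair-labeling scheme described above:
# pairs of documents bearing the same stamp get label 1, and pairs
# showing different stamps or no stamp get label 0.
from itertools import combinations

def build_training_pairs(documents):
    """documents: list of (doc_id, stamp_id_or_None) tuples."""
    pairs = []
    for (id_a, stamp_a), (id_b, stamp_b) in combinations(documents, 2):
        same = stamp_a is not None and stamp_a == stamp_b
        pairs.append(((id_a, id_b), 1 if same else 0))
    return pairs

docs = [("d1", "seal_A"), ("d2", "seal_A"), ("d3", "seal_B"), ("d4", None)]
pairs = build_training_pairs(docs)
# ("d1", "d2") is the only positive pair; the other five pairs are negative
```

The positive/negative pair labels produced this way are what a siamese-style detector would then be trained on.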
Tan et al, however, are analogous art in the same field of endeavor as the present application and also disclose a method for detecting and validating the authenticity of a seal in a document. More specifically, Tan et al teach authenticating the extracted seal in a scale invariant domain by at least using a comparison of the extracted seal and a model seal to determine a similarity score (Tan et al: where seals are aligned using a scale-invariant feature transform (SIFT) [Sec. III-B; Fig. 4], with a normalized Euclidean distance value indicating the difference between left and right difference images, indicating a value of similarity [Sec. III-D; Fig. 5]). Tan et al also disclose how their model performs more accurately than three other models provided in the literature of the art [Tan et al: Sec. IV-D; Fig. 7], outperforming Ref[13] (Su et al, “Automatic Seal Imprint Verification Systems Using Edge Difference”, 2019, IEEE Access), which discloses a method for verifying a seal without the implementation of SIFT in feature matching. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the method of seal authentication disclosed by Rasheed et al and implement the SIFT feature matching provided by Tan et al for improved authentication accuracy to arrive at the invention of the present application.

Regarding claim 2, Rasheed et al in view of Tan et al teach The computer-implemented method of claim 1 (as described above). Rasheed et al also disclose the machine learning model of claim 1 comprising a convolutional neural network. More specifically, Rasheed et al teach wherein the machine learning model comprises a convolutional neural network (Rasheed et al: digital image comparator 2700 comprises a first stage utilizing a faster regional proposal convolutional neural network (R-CNN 2704) that includes a region proposal network (RPN 3000) [¶0218; Fig. 27]).
Regarding claim 3, Rasheed et al in view of Tan et al teach The computer-implemented method of claim 1 (as previously described). Rasheed et al also disclose the machine learning model of claim 1 comprising a faster region proposal network. More specifically, Rasheed et al teach wherein the machine learning model comprises a faster regional proposal network (Rasheed et al: digital image comparator 2700 comprises a first stage utilizing a high-performance (i.e., faster) regional proposal convolutional neural network (R-CNN 2704) that includes a region proposal network (RPN 3000) [¶0218; Fig. 27]).

Regarding claim 9, Rasheed et al in view of Tan et al teach A system comprising at least one processor (Rasheed et al: processor(s) 612 [¶0117; Fig. 6]) and at least one memory including instructions, which when executed causes operations (Rasheed et al: volatile (616) and non-volatile (618) memory may store computer-executable instructions 620 to implement embodiments of the disclosed method [¶0118; Fig. 6]) comprising: receiving, by a machine learning model, one or more training documents (Rasheed et al: routine 100 outlines a method for detecting a seal or stamp, where in step 102 a document is received via a neural network [¶0101-104; Fig. 1], with routine 200 outlining the generation of a training dataset by intaking a collection of digital stamp patterns at steps 202 & 204 [¶0105-0109; Fig. 2]) and one or more labels indicating whether the one or more training documents include a seal (Rasheed et al: pair labels showing whether documents contain the same stamp may be labeled one way (e.g., with a “1”) and pairs showing different or no stamps may be labeled another (e.g., with a “0”) [¶0072]); training, using the one or more training documents and the one or more labels, the machine learning model to perform a task of detecting seals in one or more documents (Rasheed et al: routine 900 outlines a method of configuring a neural network for detecting a stamp, where images are labeled as positive or negative image pairs in accordance with their stamp labels [¶0138-0140; Fig. 9]); receiving, by the trained machine learning model, a document to be authenticated (Rasheed et al: routine 100 outlines a method for detecting a seal or stamp, where in step 102 a document is received via a neural network [¶0101-104; Fig. 1]); detecting, by the trained machine learning model, whether the document contains a seal (Rasheed et al: in step 104 the input page is configured to recognize a copy-guard digital stamp pattern [¶0102; Fig. 1]); and providing the similarity score as an indication of whether the extracted seal is authentic (Rasheed et al: digital image comparator 2700 generates a similarity score 2748 from local feature maps of two stamps [¶0226; Fig. 27]). Rasheed et al teach authenticating the extracted stamp or seal and determining a similarity score, but do not teach extracting the seal in the scale invariant domain to do so. Tan et al, however, are analogous art in the same field of endeavor as the present application and also disclose a method for detecting and validating the authenticity of a seal in a document.
More specifically, Tan et al teach authenticating the extracted seal in a scale invariant domain by at least using a comparison of the extracted seal and a model seal to determine a similarity score (Tan et al: where seals are aligned using a scale-invariant feature transform (SIFT) [Sec. III-B; Fig. 4], with a normalized Euclidean distance value indicating the difference between left and right difference images, indicating a value of similarity [Sec. III-D; Fig. 5]). Tan et al also disclose how their model performs more accurately than three other models provided in the literature of the art [Tan et al: Sec. IV-D; Fig. 7], outperforming Ref[13] (Su et al, “Automatic Seal Imprint Verification Systems Using Edge Difference”, 2019, IEEE Access), which discloses a method for verifying a seal without the implementation of SIFT in feature matching. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the method of seal authentication disclosed by Rasheed et al and implement the SIFT feature matching provided by Tan et al for improved authentication accuracy to arrive at the invention of the present application.

Regarding claim 10, Rasheed et al in view of Tan et al teach The system of claim 9 (as described above). Rasheed et al also disclose the machine learning model of claim 9 comprising a convolutional neural network. More specifically, Rasheed et al teach wherein the machine learning model comprises a convolutional neural network (Rasheed et al: digital image comparator 2700 comprises a first stage utilizing a faster regional proposal convolutional neural network (R-CNN 2704) that includes a region proposal network (RPN 3000) [¶0218; Fig. 27]).

Regarding claim 11, Rasheed et al in view of Tan et al teach The system of claim 9 (as previously described). Rasheed et al also disclose the machine learning model of claim 9 comprising a faster region proposal network.
More specifically, Rasheed et al teach wherein the machine learning model comprises a faster regional proposal network (Rasheed et al: digital image comparator 2700 comprises a first stage utilizing a high-performance (i.e., faster) regional proposal convolutional neural network (R-CNN 2704) that includes a region proposal network (RPN 3000) [¶0218; Fig. 27]).

Regarding claim 15, Rasheed et al in view of Tan et al teach The system of claim 9 (as previously described). Tan et al also disclose transforming the extracted and model seals using one or more SIFT features and forming a residual image. More specifically, Tan et al teach the system further comprising: transforming the extracted seal and the model seal into the scale invariant domain using one or more scale invariant feature transform (SIFT) features of the extracted seal and the model seal (Tan et al: where seals are aligned using a scale-invariant feature transform (SIFT) [Sec. III-B; Fig. 4]); and in response transforming, registering, using the one or more scale invariant feature transform (SIFT) features, the transformed, extracted seal and the transformed model seal to form a residual image (Tan et al: left and right difference images indicating a value of similarity [Sec. III-D; Fig. 5]). Tan et al also disclose how their model performs more accurately than three other models provided in the literature of the art [Tan et al: Sec. IV-D; Fig. 7], outperforming Ref[13] (Su et al, “Automatic Seal Imprint Verification Systems Using Edge Difference”, 2019, IEEE Access), which discloses a method for verifying a seal without the implementation of SIFT in feature matching.
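The register-then-subtract pipeline the rejection repeatedly attributes to Tan et al (align two seals, then form a residual/difference image) can be sketched as below. A faithful version would align on SIFT keypoints; this toy version substitutes a brute-force translation-only alignment, so the function name, the toy seals, and the `max_shift` parameter are all illustrative assumptions rather than anything from the cited reference.

```python
# Simplified sketch of registration followed by residual-image formation.
# A real implementation would estimate the alignment from SIFT keypoint
# matches; here a brute-force translation search stands in for that step.
import numpy as np

def register_and_residual(extracted, model, max_shift=3):
    """Shift `extracted` to best overlap `model`, then return |difference|."""
    best, best_err = None, np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(extracted, dy, axis=0), dx, axis=1)
            err = np.sum((shifted - model) ** 2)
            if err < best_err:
                best, best_err = shifted, err
    return np.abs(best - model)  # residual image: bright where seals differ

model = np.zeros((16, 16)); model[4:12, 4:12] = 1.0   # toy "model seal"
extracted = np.roll(model, 2, axis=1)                  # same seal, shifted
residual = register_and_residual(extracted, model)     # all-zero residual
```

A genuine seal that is merely misaligned registers onto the model and leaves a near-zero residual, while a forgery leaves bright residual pixels wherever it deviates, which is what a downstream similarity score then quantifies.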
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the method of seal authentication disclosed by Rasheed et al and implement the SIFT feature matching provided by Tan et al for improved authentication accuracy to arrive at the invention of the present application.

Regarding claim 16, Rasheed et al in view of Tan et al teach The system of claim 9 (as previously described). Tan et al further disclose a system for authenticating a seal wherein a similarity score based on the residual image is determined. More specifically, Tan et al teach the system further comprising: determining the similarity score based on the residual image (Tan et al: with a normalized Euclidean distance value indicating the difference between left and right difference images, indicating a value of similarity [Sec. III-D; Fig. 5]). Tan et al also disclose that the normalized Euclidean distance was chosen as the metric for computing the histogram distance (which functions as a similarity score), as it provides a stable support vector machine (SVM) classifier with good performance [Tan et al: Sec. IV-C]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the method of seal authentication disclosed by Rasheed et al in view of Tan et al, further in view of Sundararam et al, and utilize the method of computing a similarity score proposed by Tan et al to arrive at the invention of the present application.

Regarding claim 17, Rasheed et al teach A non-transitory computer-storage medium including instructions (Rasheed et al: volatile (616) and non-volatile (618) memory may store computer-executable instructions 620 to implement embodiments of the disclosed method [¶0118; Fig. 6]), which when executed by at least one processor (Rasheed et al: processor(s) 612 [¶0117; Fig. 6]), causes operations comprising: receiving, by a machine learning model, one or more training documents (Rasheed et al: routine 100 outlines a method for detecting a seal or stamp, where in step 102 a document is received via a neural network [¶0101-104; Fig. 1], with routine 200 outlining the generation of a training dataset by intaking a collection of digital stamp patterns at steps 202 & 204 [¶0105-0109; Fig. 2]) and one or more labels indicating whether the one or more training documents include a seal (Rasheed et al: pair labels showing whether documents contain the same stamp may be labeled one way (e.g., with a “1”) and pairs showing different or no stamps may be labeled another (e.g., with a “0”) [¶0072]); training, using the one or more training documents and the one or more labels, the machine learning model to perform a task of detecting seals in one or more documents (Rasheed et al: routine 900 outlines a method of configuring a neural network for detecting a stamp, where images are labeled as positive or negative image pairs in accordance with their stamp labels [¶0138-0140; Fig. 9]); receiving, by the trained machine learning model, a document to be authenticated (Rasheed et al: routine 100 outlines a method for detecting a seal or stamp, where in step 102 a document is received via a neural network [¶0101-104; Fig. 1]); detecting, by the trained machine learning model, whether the document contains a seal (Rasheed et al: in step 104 the input page is configured to recognize a copy-guard digital stamp pattern [¶0102; Fig. 1]); in response to detecting the seal, providing the seal extracted by the trained machine learning model for authentication (routine 3300 outlines the network training and validation processes for comparing pattern matches [¶0253-263; Fig. 33A]); and providing the similarity score as an indication of whether the extracted seal is authentic (digital image comparator 2700 generates a similarity score 2748 from local feature maps of two stamps [¶0226; Fig. 27]). Rasheed et al teach authenticating the extracted stamp or seal and determining a similarity score, but do not teach extracting the seal in the scale invariant domain to do so. Tan et al, however, are analogous art in the same field of endeavor as the present application and also disclose a method for detecting and validating the authenticity of a seal in a document. More specifically, Tan et al teach authenticating the extracted seal in a scale invariant domain by at least using a comparison of the extracted seal and a model seal to determine a similarity score (Tan et al: where seals are aligned using a scale-invariant feature transform (SIFT) [Sec. III-B; Fig. 4], with a normalized Euclidean distance value indicating the difference between left and right difference images, indicating a value of similarity [Sec. III-D; Fig. 5]). Tan et al also disclose how their model performs more accurately than three other models provided in the literature of the art [Tan et al: Sec. IV-D; Fig. 7], outperforming Ref[13] (Su et al, “Automatic Seal Imprint Verification Systems Using Edge Difference”, 2019, IEEE Access), which discloses a method for verifying a seal without the implementation of SIFT in feature matching. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the method of seal authentication disclosed by Rasheed et al and implement the SIFT feature matching provided by Tan et al for improved authentication accuracy to arrive at the invention of the present application.

Regarding claim 18, Rasheed et al in view of Tan et al teach The non-transitory computer-storage medium of claim 17 (as described above).
Rasheed et al also disclose the machine learning model of claim 17 comprising a convolutional neural network. More specifically, Rasheed et al teach wherein the machine learning model comprises a convolutional neural network (Rasheed et al: digital image comparator 2700 comprises a first stage utilizing a faster regional proposal convolutional neural network (R-CNN 2704) that includes a region proposal network (RPN 3000) [¶0218; Fig. 27]).

Regarding claim 19, Rasheed et al in view of Tan et al teach The non-transitory computer-storage medium of claim 17 (as previously described). Rasheed et al also disclose the machine learning model of claim 17 comprising a faster region proposal network. More specifically, Rasheed et al teach wherein the machine learning model comprises a faster regional proposal network (Rasheed et al: digital image comparator 2700 comprises a first stage utilizing a faster regional proposal convolutional neural network (R-CNN 2704) that includes a region proposal network (RPN 3000) [¶0218; Fig. 27]).

Claims 4-8, 12-14 & 20 are rejected under 35 U.S.C. 103 as being unpatentable over Rasheed et al (US 2024/0161463 A1) in view of Tan et al (“Seal Imprint Verification Using SVM Classifier and Unmatched Key Point Features”, 2021, IEEE), further in view of Sundararam et al (WO 2022/035942 A1).

Regarding claim 4, Rasheed et al in view of Tan et al teach The computer-implemented method of claim 1 (as previously described), but do not disclose preprocessing to remove background noise from the extracted seal. Sundararam et al, however, are analogous art in the same field of endeavor as the present application and also disclose a method of authenticating a seal by preprocessing to remove background noise from the extracted seal. More specifically, Sundararam et al teach wherein the authenticating further comprises: preprocessing the extracted seal to remove background noise from the extracted seal (Sundararam et al: image pre-processor 1214 in Fig. 12 can apply a Gaussian noise filter on candidate images to reduce noise [pg. 90; ln. 23-24 & pg. 91; ln. 1-11]). Thus, in accordance with KSR rationales (see MPEP § 2143), the prior art includes all of the claimed elements in the present application, with the only difference being the lack of combination. Furthermore, one of ordinary skill in the art could have easily combined the elements by known methods, and each element would merely perform the same function as it does separately. For example, the inclusion of a Gaussian filter to remove background noise would provide for the extraction of a seal that would readily reduce noise and improve the overall accuracy. Finally, one of ordinary skill in the art would recognize that the result of the combination would be predictable, since the use of Gaussian filtering would merely reduce the background noise, resulting in an improved process. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the method of seal authentication proposed by Rasheed et al in view of Tan et al and implement the reduction of background noise using a Gaussian filter proposed by Sundararam et al to arrive at the invention of the present application.

Regarding claim 5, Rasheed et al in view of Tan et al, further in view of Sundararam et al teach The computer-implemented method of claim 4 (as described above). Sundararam et al also disclose a method of authenticating a seal by preprocessing to remove background noise from the extracted seal. More specifically, Sundararam et al teach wherein the background noise is removed using a first filtering algorithm if the extracted seal is in color (Sundararam et al: image pre-processor 1214 in Fig. 12 can convert the candidate image to grayscale and apply a Gaussian noise filter to reduce noise or smooth variations [pg. 90; ln. 23-24 & pg. 91; ln. 1-11]).
Thus, in accordance with KSR rationales (see MPEP § 2143), the prior art includes all of the claimed elements in the present application, with the only difference being the lack of combination. Furthermore, one of ordinary skill in the art could have easily combined the elements by known methods, and each element would merely perform the same function as it does separately. For example, the inclusion of a Gaussian filter to remove background noise would provide for the extraction of a seal that would readily reduce noise and improve the overall accuracy. The conversion of the image to grayscale prior to Gaussian filtering simply reduces image dimensionality and aids processing efficiency. Finally, one of ordinary skill in the art would recognize that the result of the combination would be predictable, since the conversion to grayscale and use of Gaussian filtering would merely reduce the background noise, resulting in an improved process. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the method of seal authentication proposed by Rasheed et al in view of Tan et al and implement the conversion to grayscale and reduction of background noise using a Gaussian filter proposed by Sundararam et al to arrive at the invention of the present application.

Regarding claim 6, Rasheed et al in view of Tan et al, further in view of Sundararam et al teach The computer-implemented method of claim 4 (as previously described). Sundararam et al also disclose removing background noise via a second filtering algorithm if the extracted seal is in gray scale. More specifically, Sundararam et al teach wherein the background noise is using a second filtering algorithm if the extracted seal is in gray scale (Sundararam et al: image pre-processor 1214 in Fig. 12 can apply a Gaussian noise filter on candidate images to reduce noise [pg. 90; ln. 23-24 & pg. 91; ln. 1-11]).
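The two preprocessing branches at issue in claims 5 and 6 (a color seal is first converted to grayscale and then filtered; a grayscale seal is filtered directly) can be illustrated with a self-contained NumPy sketch. The luminance weights and the separable-blur implementation below are generic image-processing conventions, assumed for illustration, not code from Sundararam et al.

```python
# Illustrative sketch of the two-branch preprocessing discussed above:
# a color seal is converted to grayscale, then a small Gaussian blur
# suppresses background noise; a grayscale seal skips straight to the blur.
import numpy as np

def gaussian_kernel1d(sigma=1.0, radius=2):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()  # normalized so a constant image is preserved

def denoise_seal(img, sigma=1.0):
    if img.ndim == 3:  # color branch: RGB -> grayscale (ITU-R 601 weights)
        img = img @ np.array([0.299, 0.587, 0.114])
    k = gaussian_kernel1d(sigma)
    # separable Gaussian: filter rows, then columns
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    return img

noisy = np.ones((8, 8, 3)) * 0.5
noisy[4, 4] = 1.0                 # a speck of "background noise"
clean = denoise_seal(noisy)       # the blur spreads and attenuates the speck
```

Either way the input arrives, the output is a single denoised grayscale channel ready for the registration and comparison steps discussed elsewhere in the rejection.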
Thus, in accordance with KSR rationales (see MPEP § 2143), the prior art includes all of the claimed elements in the present application, with the only difference being the lack of combination. Furthermore, one of ordinary skill in the art could have easily combined the elements by known methods, and each element would merely perform the same function as it does separately. For example, the inclusion of a Gaussian filter to remove background noise would provide for the extraction of a seal that would readily reduce noise and improve the overall accuracy. Finally, one of ordinary skill in the art would recognize that the result of the combination would be predictable, since the use of Gaussian filtering would merely reduce the background noise, resulting in an improved process. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the method of seal authentication proposed by Rasheed et al in view of Tan et al and implement the reduction of background noise for a gray scale seal using a Gaussian filter proposed by Sundararam et al to arrive at the invention of the present application.

Regarding claim 7, Rasheed et al in view of Tan et al, further in view of Sundararam et al teach The computer-implemented method of claim 4 (as previously described). Tan et al also disclose transforming the extracted and model seals using one or more SIFT features and forming a residual image. More specifically, Tan et al teach the method further comprising: transforming the extracted seal and the model seal into the scale invariant domain using one or more scale invariant feature transform (SIFT) features of the extracted seal and the model seal (Tan et al: where seals are aligned using a scale-invariant feature transform (SIFT) [Sec. III-B; Fig. 4]); and in response transforming, registering, using the one or more scale invariant feature transform (SIFT) features, the transformed, extracted seal and the transformed model seal to form a residual image (Tan et al: left and right difference images indicating a value of similarity [Sec. III-D; Fig. 5]). Tan et al also disclose how their model performs more accurately than three other models provided in the literature of the art [Tan et al: Sec. IV-D; Fig. 7], outperforming Ref[13] (Su et al, “Automatic Seal Imprint Verification Systems Using Edge Difference”, 2019, IEEE Access), which discloses a method for verifying a seal without the implementation of SIFT in feature matching. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the method of seal authentication disclosed by Rasheed et al and implement the SIFT feature matching provided by Tan et al for improved authentication accuracy to arrive at the invention of the present application.

Regarding claim 8, Rasheed et al in view of Tan et al, further in view of Sundararam et al teach The computer-implemented method of claim 7 (as previously described). Tan et al further disclose a method of authenticating a seal wherein a similarity score based on the residual image is determined. More specifically, Tan et al teach the method further comprising: determining the similarity score based on the residual image (Tan et al: with a normalized Euclidean distance value indicating the difference between left and right difference images, indicating a value of similarity [Sec. III-D; Fig. 5]). Tan et al also disclose that the normalized Euclidean distance was chosen as the metric for computing the histogram distance (which functions as a similarity score), as it provides a stable support vector machine (SVM) classifier with good performance [Tan et al: Sec. IV-C].
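The normalized Euclidean distance that the rejection treats as the similarity score has the general shape below. The per-pixel normalization and the distance-to-score mapping are illustrative assumptions, not Tan et al's exact formulation; Tan et al feed the distance to an SVM classifier rather than thresholding a score directly.

```python
# Hedged sketch of a normalized Euclidean distance between two difference
# images, mapped to a bounded similarity score. The per-pixel RMS
# normalization and the 1/(1+d) mapping are illustrative choices only.
import numpy as np

def normalized_euclidean(a, b):
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return np.linalg.norm(a - b) / np.sqrt(a.size)  # per-pixel RMS distance

def similarity_score(left_diff, right_diff):
    # identical difference images -> distance 0 -> score 1.0;
    # larger distances decay toward 0
    return 1.0 / (1.0 + normalized_euclidean(left_diff, right_diff))

left = np.zeros((8, 8))
right = np.ones((8, 8))
score = similarity_score(left, right)  # per-pixel distance 1.0 -> score 0.5
```

Dividing by the pixel count makes the distance comparable across seal images of different sizes, which is the point of normalizing before any score or classifier threshold is applied.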
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the method of seal authentication disclosed by Rasheed et al in view of Tan et al, further in view of Sundararam et al, and utilize the method of computing a similarity score proposed by Tan et al to arrive at the invention of the present application.

Regarding claim 12, Rasheed et al in view of Tan et al teach The system of claim 9 (as previously described), but do not disclose preprocessing to remove background noise from the extracted seal. Sundararam et al, however, are analogous art in the same field of endeavor as the present application and also disclose a method of authenticating a seal by preprocessing to remove background noise from the extracted seal. More specifically, Sundararam et al teach wherein the authenticating further comprises: preprocessing the extracted seal to remove background noise from the extracted seal (Sundararam et al: image pre-processor 1214 in Fig. 12 can apply a Gaussian noise filter on candidate images to reduce noise [pg. 90; ln. 23-24 & pg. 91; ln. 1-11]). Thus, in accordance with KSR rationales (see MPEP § 2143), the prior art includes all of the claimed elements in the present application, with the only difference being the lack of combination. Furthermore, one of ordinary skill in the art could have easily combined the elements by known methods, and each element would merely perform the same function as it does separately. For example, the inclusion of a Gaussian filter to remove background noise would provide for the extraction of a seal with reduced noise and improved overall accuracy. Finally, one of ordinary skill in the art would recognize that the result of the combination would be predictable, since the use of Gaussian filtering would merely reduce the background noise, resulting in an improved process.
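The Gaussian noise filtering attributed to Sundararam et al's image pre-processor 1214 is a standard operation. As a minimal pure-Python sketch (illustrative only, not the reference's implementation), a normalized 1-D Gaussian kernel applied along rows and then columns performs the blur, since the 2-D Gaussian is separable:

```python
import math

def gaussian_kernel(sigma, radius=None):
    # Discrete 1-D Gaussian, normalized to sum to 1. A 2-D blur is
    # obtained by filtering rows and then columns with this kernel.
    radius = radius if radius is not None else max(1, int(3 * sigma))
    k = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_row(row, kernel):
    # Convolve one row of gray values with the kernel, clamping
    # indices at the borders (replicate-edge padding).
    r = (len(kernel) - 1) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(row) - 1)
            acc += w * row[idx]
        out.append(acc)
    return out
```

Because the kernel weights sum to 1, a constant region passes through unchanged while isolated noise spikes are spread out and attenuated, which is the noise-reduction behavior the rejection relies on.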
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the method of seal authentication proposed by Rasheed et al in view of Tan et al and implement the reduction of background noise using a Gaussian filter proposed by Sundararam et al to arrive at the invention of the present application.

Regarding claim 13, Rasheed et al in view of Tan et al, further in view of Sundararam et al teach The system of claim 12 (as described above). Sundararam et al also disclose a method of authenticating a seal by preprocessing to remove background noise from the extracted seal. More specifically, Sundararam et al teach wherein the background noise is removed using a first filtering algorithm if the extracted seal is in color (Sundararam et al: image pre-processor 1214 in Fig. 12 can convert the candidate image to grayscale and apply a Gaussian noise filter to reduce noise or smooth variations [pg. 90; ln. 23-24 & pg. 91; ln. 1-11]). Thus, in accordance with KSR rationales (see MPEP § 2143), the prior art includes all of the claimed elements in the present application, with the only difference being the lack of combination. Furthermore, one of ordinary skill in the art could have easily combined the elements by known methods, and each element would merely perform the same function as it does separately. For example, the inclusion of a Gaussian filter to remove background noise would provide for the extraction of a seal with reduced noise and improved overall accuracy. The conversion of the image to grayscale prior to Gaussian filtering simply reduces image dimensionality and aids processing efficiency. Finally, one of ordinary skill in the art would recognize that the result of the combination would be predictable, since the conversion to grayscale and the use of Gaussian filtering would merely reduce the background noise, resulting in an improved process.
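The grayscale conversion noted for claim 13 is likewise conventional. The sketch below uses the ITU-R BT.601 luma weights as an assumed convention; the cited passage does not specify which coefficients the pre-processor uses:

```python
def to_grayscale(r, g, b):
    # ITU-R BT.601 luma weights -- a common choice, assumed here for
    # illustration only; the weights sum to 1, so gray values stay in
    # the same range as the input channels. Collapsing three channels
    # to one is the dimensionality reduction noted above.
    return 0.299 * r + 0.587 * g + 0.114 * b
```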
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the method of seal authentication proposed by Rasheed et al in view of Tan et al and implement the conversion to grayscale and the reduction of background noise using a Gaussian filter proposed by Sundararam et al to arrive at the invention of the present application.

Regarding claim 14, Rasheed et al in view of Tan et al, further in view of Sundararam et al teach The system of claim 12 (as previously described). Sundararam et al also disclose removing background noise via a second filtering algorithm if the extracted seal is in gray scale. More specifically, Sundararam et al teach wherein the background noise is removed using a second filtering algorithm if the extracted seal is in gray scale (Sundararam et al: image pre-processor 1214 in Fig. 12 can apply a Gaussian noise filter on candidate images to reduce noise [pg. 90; ln. 23-24 & pg. 91; ln. 1-11]). Thus, in accordance with KSR rationales (see MPEP § 2143), the prior art includes all of the claimed elements in the present application, with the only difference being the lack of combination. Furthermore, one of ordinary skill in the art could have easily combined the elements by known methods, and each element would merely perform the same function as it does separately. For example, the inclusion of a Gaussian filter to remove background noise would provide for the extraction of a seal with reduced noise and improved overall accuracy. Finally, one of ordinary skill in the art would recognize that the result of the combination would be predictable, since the use of Gaussian filtering would merely reduce the background noise, resulting in an improved process.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the method of seal authentication proposed by Rasheed et al in view of Tan et al and implement the reduction of background noise for a gray scale seal using a Gaussian filter proposed by Sundararam et al to arrive at the invention of the present application.

Regarding claim 20, Rasheed et al in view of Tan et al teach The non-transitory computer-storage medium of claim 17 (as previously described), but do not disclose preprocessing to remove background noise from the extracted seal. Sundararam et al, however, are analogous art in the same field of endeavor as the present application and also disclose a method of authenticating a seal by preprocessing to remove background noise from the extracted seal. More specifically, Sundararam et al teach wherein the authenticating further comprises: preprocessing the extracted seal to remove background noise from the extracted seal (Sundararam et al: image pre-processor 1214 in Fig. 12 can apply a Gaussian noise filter on candidate images to reduce noise [pg. 90; ln. 23-24 & pg. 91; ln. 1-11]). Thus, the prior art includes all of the claimed elements in the present application, with the only difference being the lack of combination. Furthermore, one of ordinary skill in the art could have easily combined the elements by known methods, and each element would merely perform the same function as it does separately. For example, the inclusion of a Gaussian filter to remove background noise would provide for the extraction of a seal with reduced noise and improved overall accuracy. Finally, in accordance with KSR rationales (see MPEP § 2143), one of ordinary skill in the art would recognize that the result of the combination would be predictable, since the use of Gaussian filtering would merely reduce the background noise, resulting in an improved process.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the method of seal authentication proposed by Rasheed et al in view of Tan et al and implement the reduction of background noise using a Gaussian filter proposed by Sundararam et al to arrive at the invention of the present application.

Conclusion

The prior art made of record and not relied upon is considered pertinent to the applicant's disclosure:

Wang, Jia-Lu (CN 115810196 A) discloses a seal identification method and device that extracts seal features using SIFT.
Kim et al (US 11861816 B2) disclose a system and method for detecting forged stamps utilizing a convolutional neural network.
Huang et al (CN 114663867 A) disclose a method and device for identifying a stamp or seal using SIFT keypoint matching.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael M. Sofroniou, whose telephone number is (571) 272-0287. The examiner can normally be reached M-F: 7:30 AM - 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John M. Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL M SOFRONIOU/
Examiner, Art Unit 2661

/JOHN VILLECCO/
Supervisory Patent Examiner, Art Unit 2661
Prosecution Timeline

Dec 12, 2023: Application Filed
Jan 29, 2026: Non-Final Rejection under §103 and §112 (current)
