Prosecution Insights
Last updated: April 19, 2026
Application No. 17/897,736

High-resolution Seismic Fault Detection with Adversarial Neural Networks and Regularization

Non-Final OA: §101, §103
Filed
Aug 29, 2022
Examiner
TSAI, JAMES T
Art Unit
2147
Tech Center
2100 — Computer Architecture & Software
Assignee
Institute Of Geology And Geophysics Chinese Academy Of Sciences
OA Round
1 (Non-Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% of resolved cases (184 granted / 297 resolved; +7.0% vs TC avg)
Interview Lift: strong, +56.0% for resolved cases with an interview vs. without
Typical Timeline: 3y 1m avg prosecution; 19 applications currently pending
Career History: 316 total applications across all art units

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 57.5% (+17.5% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 297 resolved cases.

Office Action

§101, §103
NON-FINAL REJECTION, FIRST DETAILED ACTION

Status of Prosecution

The present application 17/897,736, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. The application was filed in the Office on Aug. 29, 2022 and claims priority to Chinese application CN202111001525.3, filed Aug. 30, 2021. Claims 1-9 are pending, and all are rejected in this action.

Status of Claims

Claims 1-9 are rejected under 35 U.S.C. § 101. Claims 2, 3 and 7 are objected to. Claims 1, 6 and 9 are rejected under 35 U.S.C. § 103 as being unpatentable over United States Patent Application Publication Jiang, US 2022/0351403 (published Nov. 3, 2022), in view of non-patent literature Alfarhan et al. ("Alfarhan"), "Robust Concurrent Detection of Salt Domes and Faults in Seismic Surveys Using an Improved UNet Architecture" (published Nov. 8, 2020), in further view of Xiong et al. ("Xiong"), "Attention U-Net with Feature Fusion Module for Robust Defect Detection" (published Nov. 18, 2020). Claims 2-5 and 7-8 are rejected under 35 U.S.C. § 103 as being unpatentable over Jiang in view of Alfarhan, in further view of Xiong, in further view of non-patent literature Imade et al. ("Imade"), "Loss Function of GAN to Make a Clear Judgment" (published 2021).

Objections

Claim 2 is objected to for what appears to be a typographical error. The claim recites in part, "an updating step: …
finishing the training till the discriminative difference value is less than a preset threshold value." Examiner believes the claim element should instead read, "finishing the training if the discriminative difference value is less than a preset threshold value." Claim 7 is similarly objected to. Examination will proceed with this language. Correction is required.

Claim 3 is objected to as well for what appears to be a typographical error: there is no antecedent basis for "degree of attention." The claim is examined as reading, "performing local feature inversion on the predicted fault feature to obtain a degree of attention of the predicted fault feature." Correction is required.

Claim Interpretation – 35 U.S.C. § 112(f)

This application includes one or more claim limitations that do not use the word "means" but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the limitations use a generic placeholder coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. The limitations are: the segmentation module, feature fusion module and discriminator module in claim 1. Because these limitations are interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are construed to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid the interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function.

Claim Rejections – 35 U.S.C. § 101, Subject Matter Eligibility

35 U.S.C. § 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-9 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding representative claim 1, at step 1, the claim recites a computer-implemented method and is therefore a process, which is a statutory category of invention. See MPEP § 2106.03. At step 2A, prong one, the claim recites a method for high-resolution seismic fault detection with an adversarial neural network. The limitation "training a target adversarial neural network based on a preset training sample set to obtain a trained target adversarial neural network" is a process or step that, under the broadest reasonable interpretation, is a mathematical calculation or, alternatively, a mental process. See MPEP § 2106.04(a)(2)(I)(C), (III)(C). The claim therefore recites an abstract idea. At step 2A, prong two, the claim language is analyzed to determine whether it recites additional elements that integrate the judicial exception into a practical application. See MPEP § 2106.04(d).
The limitations "wherein the preset training sample set comprises seismic data and fault labels, the target adversarial neural network comprises: a segmentation module, a feature fusion module, and a discriminator module, the segmentation module is a module configured for obtaining a fault feature based on the preset training sample set, and the feature fusion module is a module configured for fusing the fault feature and the seismic data into a global feature map," and "performing seismic fault detection on a target seismic image based on the trained target adversarial neural network," are, under their broadest reasonable interpretation, additional elements that generally link the use of the judicial exception to a particular technological environment or field of use, specifically adversarial network training and prediction. See MPEP §§ 2106.04(d), 2106.05(h). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is therefore directed to an abstract idea.

Next, at step 2B, the claim is considered as to whether it recites additional elements that amount to significantly more than the judicial exception. See MPEP § 2106.05. As discussed above with respect to integration into a practical application, the additional elements amount to nothing more than linking the use of the judicial exception to a particular technological environment or field of use, specifically adversarial network training and prediction. See MPEP § 2106.05(h). Claim 1 is therefore ineligible.

As to dependent claims 2-5, the analysis of the parent claim is incorporated. The additional limitations are specific further limitations related to training, predicting, or fusing steps, which are also processes or steps that, under the broadest reasonable interpretation, are the abstract idea of a mathematical calculation. See MPEP § 2106.04(a)(2)(I)(C). These claims are also ineligible.

The remaining claims 6-9 are substantively similar in analysis. The only differences, such as the statutory class at step 1, are statutory (a machine or manufacture), but these claims still fail the remainder of the analysis and are therefore also ineligible.

Claim Rejections – 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. § 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

A. Claims 1, 6 and 9 are rejected under 35 U.S.C. § 103 as being unpatentable over United States Patent Application Publication Jiang, US 2022/0351403 (published Nov. 3, 2022), in view of Alfarhan ("Robust Concurrent Detection of Salt Domes and Faults in Seismic Surveys Using an Improved UNet Architecture," Nov. 8, 2020), in further view of Xiong ("Attention U-Net with Feature Fusion Module for Robust Defect Detection," Nov. 18, 2020).
As to Claim 1, Jiang teaches: a method for high-resolution seismic fault detection with an adversarial neural network, comprising the following steps: training a target adversarial neural network based on a preset training sample set to obtain a trained target adversarial neural network, wherein the preset training sample set comprises seismic data and fault labels (Jiang: Fig. 3, par. 0118, the generative adversarial network (GAN) training system [301] can be trained using true and downsampled synthetic fault data, i.e. a preset training sample set); the target adversarial neural network comprises a discriminator module (Jiang: Fig. 3, [307] discriminator, par. 0029); and performing seismic fault detection on a target seismic image based on the trained target adversarial neural network (Jiang: par. 0026, the probability values in the fault detection data [206] may be displayed as a graphical heat map).

[Image omitted: media_image1.png]

Jiang may not explicitly teach: a segmentation module, the segmentation module being configured for obtaining a fault feature based on the preset training sample set. Alfarhan teaches general concepts related to a semantic segmentation model for salt dome and fault identification using an encoder-decoder deep neural network (Alfarhan: Abstract). Specifically, Alfarhan discloses the use of UNet variants for best detection accuracy (Alfarhan: III.A, "We propose to explore two UNet variants in order to find out which architecture has higher detection accuracy."). UNet is the benchmark approach for segmentation tasks (Alfarhan: II.B.(2): "In this work, we employ the UNet model for concurrent detection of salt domes and faults in real seismic data. The UNet has become the benchmark approach for semantic segmentation which led us to select it for tackling our problem over other semantic segmentation deep learning based approaches.").

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the application, to have modified the Jiang disclosures by including a UNet-based segmentation module for segmentation problems related to detection of faults in the training sample sets, as taught and disclosed by Alfarhan. Such a person would have been motivated to do so, with a reasonable expectation of success, because of UNet's simplicity and its customizability for integration into other architectures (Alfarhan: II.B.(2)).

Jiang and Alfarhan may not explicitly teach: a feature fusion module, the feature fusion module being configured for fusing the fault feature and the seismic data into a global feature map. Xiong teaches general concepts related to defect detection in industrial scenes using UNet with feature fusion modules that combine multi-scale features to detect defects in noisy images automatically (Xiong: Abstract). Feature fusion is a process wherein the input is a combination of basic feature maps and the up-sampled output from subsize feature maps with attention gates (Xiong: Sec. 3.2, Fig. 2 [FFM]). The resulting output feature maps are concatenated with the corresponding up-sampling feature maps in the decoder (i.e. a global feature map).

[Image omitted: media_image2.png]

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the application, to have modified the Jiang-Alfarhan disclosures by including a feature fusion module with the Alfarhan UNet, as taught and disclosed by Xiong. Such a person would have been motivated to do so, with a reasonable expectation of success, to allow attention to shallow-layer shape and texture information that might otherwise be missed (Xiong: Sec. 3.2.1).

As to Claim 6, it is rejected for similar reasons as claim 1. As to Claim 9, it is rejected for similar reasons as claim 1.
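The Xiong-style feature fusion described above can be sketched in a few lines of numpy. This is a simplified illustration of the cited concept only, not code from any of the references: the additive sigmoid gate and the nearest-neighbour upsampling are assumptions standing in for Xiong's learned attention gates and decoder upsampling.

```python
import numpy as np

def attention_gate(base, gating):
    """Simplified additive attention gate: weight the base features by a
    sigmoid of (base + gating). A real attention gate uses learned 1x1
    convolutions; plain addition is an assumption of this sketch."""
    weights = 1.0 / (1.0 + np.exp(-(base + gating)))
    return base * weights

def feature_fusion(base_map, subsize_map):
    """Fuse a full-resolution feature map with the upsampled output of a
    subsize feature map, then concatenate along the channel axis."""
    # Nearest-neighbour upsample the subsize map to the base resolution.
    fy = base_map.shape[1] // subsize_map.shape[1]
    fx = base_map.shape[2] // subsize_map.shape[2]
    upsampled = subsize_map.repeat(fy, axis=1).repeat(fx, axis=2)
    gated = attention_gate(base_map, upsampled)
    return np.concatenate([gated, upsampled], axis=0)

base = np.random.rand(4, 16, 16)   # (channels, H, W) shallow features
small = np.random.rand(4, 8, 8)    # half-resolution deep features
fused = feature_fusion(base, small)
print(fused.shape)                 # (8, 16, 16)
```

The concatenated output plays the role of the "global feature map" the claims recite: coarse context from the deep path sits alongside gated fine detail from the shallow path.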
Jiang further teaches a memory, a processor, and a computer program to perform the steps of claim 1 (Jiang: par. 0054).

B. Claims 2-5 and 7-8 are rejected under 35 U.S.C. § 103 as being unpatentable over Jiang (US 2022/0351403) in view of Alfarhan, in further view of Xiong, in further view of Imade ("Loss Function of GAN to Make a Clear Judgment," 2021).

As to Claim 2, Jiang, Alfarhan and Xiong teach the elements of claim 1. Jiang, Alfarhan and Xiong as combined further teach: wherein the step of training a target adversarial neural network based on a preset training set comprises: a first training step: training the segmentation module by utilizing the preset training sample set based on a balanced cross entropy loss function, so as to obtain a trained segmentation module (Alfarhan: Sec. III.B.3, addressing class imbalance with a balanced cross entropy loss function); a predicting step: substituting the preset training sample set into the trained segmentation module to obtain a predicted fault feature (Examiner asserts that the preset training sample would be sent to the segmentation module to obtain the fault features as required for use in Xiong's fusion module to accomplish the combination); a fusing step: fusing the seismic data and the predicted fault feature into a global feature map based on the feature fusion module (Xiong: Sec. 3.2, Fig. 2 [FFM]).
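Claim 2's first training step relies on a balanced cross entropy loss to cope with the scarcity of fault pixels that Alfarhan addresses. Below is a minimal numpy sketch of the common HED-style form of that loss; the exact weighting used in the application and in Alfarhan may differ, so treat this as an illustration of the idea only.

```python
import numpy as np

def balanced_cross_entropy(y_true, y_pred, eps=1e-7):
    """Balanced BCE: rare positive (fault) pixels are up-weighted by beta,
    the fraction of negative pixels, so the loss is not dominated by the
    abundant non-fault background. Textbook form, assumed here."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    beta = 1.0 - y_true.mean()            # fraction of non-fault pixels
    pos = -beta * y_true * np.log(y_pred)
    neg = -(1.0 - beta) * (1.0 - y_true) * np.log(1.0 - y_pred)
    return float((pos + neg).mean())

labels = np.array([0.0, 0.0, 0.0, 0.0, 1.0])   # faults are the rare class
good = balanced_cross_entropy(labels, np.array([0.1, 0.1, 0.1, 0.1, 0.9]))
bad = balanced_cross_entropy(labels, np.array([0.9, 0.9, 0.9, 0.9, 0.1]))
print(good < bad)   # accurate predictions yield the smaller loss
```

Because `beta` scales the positive term by the negative-class frequency, a network cannot drive the loss down simply by predicting "no fault" everywhere.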
Jiang, Alfarhan and Xiong may not explicitly teach: a second training step: training the discriminator module by utilizing the global feature map based on a categorical cross entropy loss function, so as to obtain a trained discriminator module; a discriminating step: substituting the global feature map into the trained discriminator module to obtain a discriminative difference value; and an updating step: updating the balanced cross entropy loss function based on the discriminative difference value and a regularization loss function, and repeating the steps from the first training step to the updating step, and finishing the training till the discriminative difference value is less than a preset threshold value.

Imade teaches general concepts related to a loss function that penalizes the discriminator of a GAN when it makes an ambiguous decision, through a regularization term added to the existing loss function, which is a binary cross entropy function (Imade: Abstract). The loss function in equation 3 has a regularization term that is multiplied by a hyperparameter λ. The discriminator term (i.e. a discriminative difference) is calculated as well and added to the weighted regularization term (Imade: Sec. II.C, Wasserstein GAN-gp). The goal of the GAN is to minimize the difference between two probability densities, here between the input data and the generated data, as measured by the cross entropy loss function; this would therefore entail repetition of the training until the minimization is achieved (Imade: Sec. II.B, discussion of JSD).

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the application, to have modified the Jiang-Alfarhan-Xiong disclosures by conducting a minimization of the GAN employing a regularization term with the discriminator utilizing the global feature map, as taught and suggested by Imade. Such a person would have been motivated to do so, with a reasonable expectation of success, to allow for better stability and consistency in the GAN (Imade: Abstract).

As to Claim 3, Jiang, Alfarhan, Xiong and Imade teach the elements of claim 2. Jiang further teaches: wherein the step of fusing the seismic data and the predicted fault feature into a global feature map based on the feature fusion module comprises: performing local feature inversion on the predicted fault feature to obtain a degree of attention of the predicted fault feature (Examiner asserts that the complement of the predicted fault feature probability is the degree of attention); and calculating the dot product of the degree of attention and the seismic data, and performing normalization processing of local contrast, so as to obtain the global feature map (Examiner asserts that the combination of the degree of attention and the seismic data, along with a local contrast normalization, would result in a global feature map as computed by Xiong).

As to Claim 4, Jiang, Alfarhan, Xiong and Imade teach the elements of claim 3. Jiang further teaches: wherein the predicted fault feature comprises a probability of predicted fault and a fault label; the step of performing local feature inversion on the predicted fault feature to obtain the degree of attention of the predicted fault feature comprises: performing local feature inversion on the predicted fault feature by the following equations: [equation image omitted: media_image3.png] where P is the probability of predicted fault, P-bar is the degree of attention corresponding to the probability of predicted fault, y is the fault label, and y-bar is the degree of attention corresponding to the fault label (Examiner notes that the complement of the predicted fault feature probability is the degree of attention, and similarly for the fault label).

As to Claim 5, Jiang, Alfarhan, Xiong and Imade teach the elements of claim 4.
Jiang further teaches: wherein the step of updating the balanced cross entropy loss function by utilizing a regularization loss function comprises: updating the balanced cross entropy loss function by the following equation: [equation image omitted: media_image4.png] where λ is a hyperparameter, [media_image5.png] is the regularization loss function, [media_image6.png] is the balanced cross entropy loss function, [media_image7.png] is the discriminative difference value, and C is the output tensor of the discriminator module (Examiner refers the reader to the application of Imade's regularization term in equations 3 and 4).

As to Claim 7, it is rejected for similar reasons as claim 2. As to Claim 8, it is rejected for similar reasons as claim 4.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Aspuru-Guzik et al., US Patent Application Publication 2020/0274554 (Aug. 27, 2020) (describing device-tailored error correction in quantum processors); non-patent literature, Weinstein et al., "Parameters of Pseudo-Random Quantum Circuits," Physical Review A 78.5 (2008): 052332.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES T TSAI, whose telephone number is (571) 270-3916. The examiner can normally be reached M-F 8-5 Eastern. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Viker Lamardo, can be reached at 571-270-5871.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JAMES T TSAI/
Primary Examiner, Art Unit 2147
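For readers tracing the claim 3-5 mechanics the Office Action disputes, the Examiner's "complement" reading of local feature inversion and the λ-weighted loss update discussed around Imade can be sketched in numpy. The equations in claims 4 and 5 are images in the record, so everything below is an assumed reading of the surrounding text: the zero-mean/unit-variance normalization and the exact loss form are illustrative guesses, not the application's formulas.

```python
import numpy as np

def local_feature_inversion(p, y):
    """Examiner's reading of claim 4: the degree of attention is the
    complement of the predicted fault probability and of the fault label."""
    return 1.0 - p, 1.0 - y

def global_feature_map(attention, seismic):
    """Claim 3 as examined: elementwise product of the degree of attention
    with the seismic data, followed by a local contrast normalization
    (sketched here as zero-mean / unit-variance, an assumption)."""
    fused = attention * seismic
    return (fused - fused.mean()) / (fused.std() + 1e-7)

def updated_loss(l_bce, l_reg, l_dis, lam=0.1):
    """Claim 5 as read from the surrounding text: balanced cross entropy
    plus a lambda-weighted regularization term plus the discriminative
    difference value. The actual formula is an image in the record."""
    return l_bce + lam * l_reg + l_dis

p = np.array([0.9, 0.2, 0.7])      # predicted fault probabilities
y = np.array([1.0, 0.0, 1.0])      # fault labels
p_bar, y_bar = local_feature_inversion(p, y)
g = global_feature_map(p_bar, np.array([1.0, 2.0, 3.0]))
print(round(updated_loss(1.0, 2.0, 0.5), 3))   # 1.7
```

Training under claim 2 would repeat these steps until the discriminative difference value falls below the preset threshold; that stopping rule is stated in the claim itself, not modeled here.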

Prosecution Timeline

Aug 29, 2022
Application Filed
Jan 09, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585958
METHOD AND SYSTEM FOR TWO-STEP HIERARCHICAL MODEL OPTIMIZATION
2y 5m to grant Granted Mar 24, 2026
Patent 12577416
METHOD FOR GENERATING A COMPOSITION FOR DYES, PAINTS, PRINTING INKS, GRIND RESINS, PIGMENT CONCENTRATES OR OTHER COATING SUBSTANCES
2y 5m to grant Granted Mar 17, 2026
Patent 12579413
Method and Apparatus for Performing Convolution Neural Network Operations
2y 5m to grant Granted Mar 17, 2026
Patent 12566985
METHOD AND SYSTEM FOR PERFORMING DATA PREDICTION
2y 5m to grant Granted Mar 03, 2026
Patent 12561569
INFORMATION PROCESSING METHOD FOR REDUCING STORAGE REQUIREMENTS FOR WEIGHT PARAMETER VALUES OF LEARNED DATA SETS
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62%
With Interview: 99% (+56.0%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 297 resolved cases by this examiner. Grant probability derived from career allow rate.
