Prosecution Insights
Last updated: April 19, 2026
Application No. 18/195,986

METHODS AND SYSTEMS FOR DATA FILTERING

Non-Final OA §103
Filed: May 11, 2023
Examiner: ALLEN, BRITTANY N
Art Unit: 2169
Tech Center: 2100 — Computer Architecture & Software
Assignee: BANK OF AMERICA CORPORATION
OA Round: 5 (Non-Final)

Grant Probability: 42% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 4y 8m
Grant Probability with Interview: 79%

Examiner Intelligence

Career Allow Rate: 42% of resolved cases (163 granted / 391 resolved; -13.3% vs TC avg)
Interview Lift: +37.7% (allow rate with vs. without an interview, among resolved cases)
Avg Prosecution: 4y 8m typical timeline (31 applications currently pending)
Career History: 422 total applications across all art units
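The headline figures above can be reproduced from the raw counts. A quick arithmetic check (the 42% shown is the rounded value; the implied Tech Center average is derived, not stated):

```python
# Verify the examiner stats shown above from the raw counts.
granted, resolved = 163, 391
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 41.7%, displayed as 42%

# The card reports -13.3% vs the Tech Center average, which implies:
tc_average = allow_rate + 0.133
print(f"Implied TC average: {tc_average:.1%}")  # about 55.0%
```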

Statute-Specific Performance

§101: 17.5% (-22.5% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§102: 12.3% (-27.7% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 391 resolved cases
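The per-statute deltas are internally consistent: subtracting each delta from the examiner's rate recovers the same baseline for all four statutes, which suggests the dashboard measures every statute against a single ~40% Tech Center average (an inference from the table, not something the page states):

```python
# Examiner allowance rate and delta vs TC average, per statute (from the table above).
stats = {"101": (17.5, -22.5), "103": (52.8, 12.8),
         "102": (12.3, -27.7), "112": (13.6, -26.4)}
for statute, (rate, delta) in stats.items():
    baseline = rate - delta  # implied Tech Center average
    print(f"Section {statute}: implied TC baseline {baseline:.1f}%")  # 40.0% for all four
```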

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/17/25 has been entered.

Remarks

The request for continued examination was received on 10/17/25. Claims 1 and 22-27 are pending in the application. Claims 2-21 have been cancelled. Applicant's arguments have been carefully and respectfully considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 22-27 are rejected under 35 U.S.C. 103 as being unpatentable over Brenner et al. (US 2021/0303515), and further in view of Jones et al. (US 11,494,515) and Liao et al. (US 10,803,197).
With respect to claim 1, Brenner teaches a method of optimizing channel utilization and network memory capacity when transferring a dataset from an edge layer into a platform layer of a network (Brenner, pa 0020, The network server computers are coupled directly or indirectly to the data storage 114, target VMs 104, and the data sources and other resources through network 110, which is typically a public cloud network), the method comprising: implementing the one or more data rules for designating data points when using said computer processor to run a front-end filter disposed in an edge layer of said network (Brenner, pa 0019, a storage server 102 executes a data storage or backup management process 112 that coordinates or manages the backup of data from one or more data sources 108 to storage devices) and one or more non-transitory computer-readable media storing computer-executable instructions, wherein the instructions, when executed by the computer processor running the front-end filter, automatically analyze said dataset, the method comprising the steps of: said edge layer of said network automatically receiving said dataset; wherein: said dataset is a first data set (Brenner, pa 0032, Process 400 begins with the backup process 402 starting a backup request 406 in response to a user or system command that triggers a backup operation, such as a routine scheduled backup. The backup process then scans the file system. At this point, the data labeling process 404 determines whether or not a data label already exists for the data to be backed up (the saveset), 410.); automatically analyzing, using said computer processor, said first dataset by running said front end filter using the one or more data rules created by the machine learning engine (Brenner, pa 0032, The backup process then scans the file system. At this point, the data labeling process 404 determines whether or not a data label already exists for the data to be backed up (the saveset), 410. 
If a label exists, the existing label is used, 418, as described above. If a previous label does not exist, it is created, 412. To create the label, the process in step 414 performs the sub-steps of performing a full content match and/or a filename match, and then looks up the matches in a database.); and automatically identifying, using said computer processor to run said front-end filter using the one or more data rules created by the machine learning engine, prioritized data points and deprioritized data points in said first dataset based on predetermined criteria (Brenner, pa 0033, The saveset is then sent, 420, along with any other data to the target storage media by the backup process 402. For this embodiment, the output of the data label process 416 is fed back to the backup software so that it can first record the label and apply the appropriate rule to the file being backed up based on that label); wherein: said network is managed by a commercial entity (Brenner, pa 0002, intended for data of differing organizations); said prioritized data points need to be transferred to said platform layer (Brenner, Fig. 6, data label with rules and cloud tier storage constraints. Examiner note: certain labels such as “Project XYZ” are cloud stored); said deprioritized data points need not be transferred to said platform layer (Brenner, Fig. 6, data label with rules and cloud tier storage constraints. Examiner note: certain labels such as “Highly Restrictive” are disabled from being cloud stored & pa 0038, For example, presume the backup system identified or discovered files that had a data label of Highly Restrictive. The backup software would communicate with the DLRE. While the backup operation is in progress, and the DLRE would respond with a rule such as: for all highly restrictive files, those files must be retained forever (never deleted) and cannot be stored on publicly accessible storage, such as a Cloud tier. 
It is up to the backup software to enforce and follow this rule.); said prioritized data points comprise data points that are relevant to said commercial entity (Brenner, pa 0026, For each known data type on which the process performs full content indexing, it can look for different data characteristics that match patterns supplied by the user. These patterns can be well known patterns such as personal identification information (PII) patterns ( e.g., Social Security Numbers, phone numbers, addresses, etc.). Alternatively, patterns can be supplied by the user to match their use cases (financial code, algorithms specific to their company, and so on). Characteristics can thus be defined along various different factors, including but not limited to: file type, access, source, age, application, importance, size, and so on.); said deprioritized data points comprise data points that comprise personal data fields that said commercial entity does not require to be stored and that are not saved in the edge layer (Brenner, pa 0041, The rules corresponding to different data labels can be any appropriate rule dictating storage, access, transmission, or other process associated with the saveset data. The rules may be user defined or provided by the system, or a combination of both. FIG. 6 is an example table of rules & pa 0039, data can be deleted and can be cloud tiered.); implementing, using said computer processor, the one or more rules for designating metadata tags when using said computer processor to automatically run said front-end filter assigning a first metadata tag to the prioritized data points and a second metadata tag to the deprioritized data points (Brenner, pa 0030, As part of the backup process, the backup software 112 will apply the rules described in this table to each file and associate the named data label to each file.). 
automatically subtracting, using said computer processor, said deprioritized data points from said first dataset by identifying data points that have the second metadata tag, said computer processor generating a trimmed dataset that consists of data points that have the first metadata tag (Brenner, pa 0033, the output of the data label process 416 is fed back to the backup software so that it can first record the label and apply the appropriate rule to the file being backed up based on that label, and second, the file can be sent to the storage target based on the rules applied. In general, no data is sent to the storage target before the rules are applied. & pa 0044, For each rule and file, it will follow the rules based on the label in the table, 708. This workflow is done for each file and will continue till all files are processed.) automatically transferring, using said computer processor, said trimmed data set to said platform layer (Brenner, pa 0033, The saveset is then sent, 420, along with any other data to the target storage media by the backup process 402… no data is sent to the storage target before the rules are applied. & pa 0042, default data has a rule that is unrestricted, and thus this data can be stored locally or cloud tiered, and can be replicated and retained at will. Other types of data, such as highly restricted data, is subject to stricter rules, such as it cannot be cloud tiered, and it cannot be deleted, i.e., retained forever.); automatically monitoring, using said computer processor, said edge layer of said network, said monitoring determining when a second data set is received by the network (Brenner, pa 0032, Process 400 begins with the backup process 402 starting a backup request 406 in response to a user or system command that triggers a backup operation, such as a routine scheduled backup. 
Examiner note: new backups can be triggered to create further backup operation); and automatically processing, using said computer processor, the second data set as the first data set was processed (Brenner, pa 0031, In certain cases, files or datasets may already be labeled, such as by an application, system administrator or upstream rules engine. Thus, as shown in FIG. 2, process 200 may discover and use existing data labels, 204. Examiner note: this indicates that datasets may already have been processed).

Brenner doesn't expressly discuss utilizing a computer processor to run a machine learning engine to create one or more data rules for designating data points; and said computer processor: running the machine learning engine to create one or more data rules for designating metadata tags; and wherein said edge layer is a demilitarized zone (DMZ) of the network; automatically subtracting, using said computer processor, said deprioritized data points from said first dataset by identifying data points that have the second metadata tag, said computer processor generating a trimmed dataset that consists of data points that have the first metadata tag; automatically replacing, using said computer processor, the first data set in the edge layer with said trimmed data set.

Jones teaches utilizing a computer processor to run a machine learning engine to create one or more data rules for designating data points (Jones, Col. 16 Li. 12-17, the machine-learning classification model may generate a feature representation (e.g., a feature vector) having components associated with the different types of target data. Each of the components may provide a prediction (e.g., prediction value) with respect to the corresponding type of target data applying to the particular data element); and implementing the one or more data rules for designating data points (Jones, Col. 16 Li.
12-17, the machine-learning classification model may generate a feature representation (e.g., a feature vector) having components associated with the different types of target data. Each of the components may provide a prediction (e.g., prediction value) with respect to the corresponding type of target data applying to the particular data element) when using said computer processor to run a front-end filter disposed in an edge layer of said network (Jones, Fig. 1, Proxy Computing System 120 with anonymizer module 125); said edge layer of said network automatically receiving said dataset; wherein said edge layer is a demilitarized zone (DMZ) of the network; and said data set is a first data set (Jones, Col. 10 Li. 37-42, To help combat this challenge, a proxy computing system 120 is provided according to various aspects that intercepts the results (source dataset) of the mapping of the target data for the one or more data sources 137 and anonymizes the data samples for various data elements before providing the results for review.); running the machine learning engine to create one or more data rules for designating metadata tags; and implementing the one or more rules for designating metadata tags (Jones, Col. 16 Li. 21-23, At Operation 520, the classification module 135 labels the data element based on the predictions generated by the machine-learning classification model. According to various aspects, the classification module 135 performs this particular operation by determining whether the prediction generated for a particular type of target data satisfies a threshold.); automatically subtracting, using said computer processor, said deprioritized data points from said first dataset by identifying data points that have the second metadata tag, said computer processor generating a trimmed dataset that consists of data points that have the first metadata tag (Jones, Col. 11 Li. 
23-29, intermingling the supplemental anonymizing data samples with the data samples gathered for a data element having real occurrences of the type of target data identified by the label anonymizes the data samples for the data element since the data samples can no longer be easily associated with a real data subject based on, for example, other proximate data samples in the review dataset); automatically replacing, using said computer processor, the first data set in the edge layer with said trimmed data set (Jones, Col. 13 Li. 28-31, At Operation 325, the anonymizer module 125 generates a review dataset comprising the supplemental anonymizing data samples intermingled with the data samples for each of the one or more data elements).

It would have been obvious at the effective filing date of the invention to a person having ordinary skill in the art to which said subject matter pertains to have modified Brenner with the teachings of Jones because it obfuscates sensitive information (Jones, Col. 6 Li. 8-12) while efficiently labeling data with machine learning (Jones, Col. 9 Li. 17-19).

Brenner in view of Jones doesn't expressly discuss automatically replacing, using said computer processor, the first data set in the edge layer with said trimmed data set, said trimmed dataset taking up less memory than the first dataset.

Liao teaches automatically subtracting, using said computer processor, said deprioritized data points from said first dataset by identifying data points that have the second metadata tag, said computer processor generating a trimmed dataset that consists of data points that have the first metadata tag (Liao, Col. 24 Li. 8-22, Corresponding to individual ones of the filtered access requests, a respective log record (LR) 1491 (e.g., 1491A, 1491B, 1491C, 1491D) may be stored in a log 1435 by the log record generator 1430.
As shown, in various embodiments, at least some of the log records (such as 1491A, 1491B, and 1491D) may comprise obfuscated portions 1492, corresponding to data that was substituted or redacted… instead of generating substitutes, at least some tokens may be removed entirely from the versions of the filtering requirements that are stored in the log 1435); automatically replacing, using said computer processor, the first data set in the edge layer with said trimmed data set, said trimmed dataset taking up less memory than the first dataset (Liao, Col. 24 Li. 8-22, Corresponding to individual ones of the filtered access requests, a respective log record (LR) 1491 (e.g., 1491A, 1491B, 1491C, 1491D) may be stored in a log 1435 by the log record generator 1430.); wherein optimizing network memory capacity is achieved by replacing the first dataset with the trimmed dataset (Liao, Col. 9 Li. 27-35, the sizes of individual UDIs 122 may be quite large--e.g., items that are petabytes in size may be supported by the OSS 102. Only a small subset of the contents of a given UDI may be needed for a particular client application; the client may therefore use filtered accesses 178 to reduce the amount of data that has to be transferred to the destination computing devices at which the client application is to be run, and to reduce the amount of memory/storage required at the destination devices.) automatically transferring, using said computer processor, said trimmed data set to said platform layer; wherein: transferring the trimmed data set uses less bandwidth than transferring the first data set due to the trimmed data set taking up less memory than the first data set (Liao, Col. 9 Li. 27-35, the sizes of individual UDIs 122 may be quite large--e.g., items that are petabytes in size may be supported by the OSS 102. 
Only a small subset of the contents of a given UDI may be needed for a particular client application; the client may therefore use filtered accesses 178 to reduce the amount of data that has to be transferred to the destination computing devices at which the client application is to be run, and to reduce the amount of memory/storage required at the destination devices. & Col. 30 Li. 28-35, If a client is able to specify interpretation rules for the unstructured data items, and succinctly indicate filtering criteria ( e.g., using languages similar to SQL, regular expressions or the like) that can be tested to identify subsets of the stored data that are to be retrieved, considerable savings in network bandwidth, memory and processing at client-side computing devices may be achieved); and transferring the trimmed data set instead of the first data set (Liao, Col. 11 Li. 41-46, The output of the server-side filtering and transformation operations may, for example, comprise just 0.2 gigabytes of data. The transfer 230B of the filtered and transformed data may therefore require far less network bandwidth than transfer 230A.) from the edge layer to the platform layer, said transferring optimizing channel utilization (Liao, Col. 9 Li. 27-35, the sizes of individual UDIs 122 may be quite large--e.g., items that are petabytes in size may be supported by the OSS 102. Only a small subset of the contents of a given UDI may be needed for a particular client application; the client may therefore use filtered accesses 178 to reduce the amount of data that has to be transferred to the destination computing devices at which the client application is to be run, and to reduce the amount of memory/storage required at the destination devices. & Col. 30 Li. 
28-35, If a client is able to specify interpretation rules for the unstructured data items, and succinctly indicate filtering criteria (e.g., using languages similar to SQL, regular expressions or the like) that can be tested to identify subsets of the stored data that are to be retrieved, considerable savings in network bandwidth, memory and processing at client-side computing devices may be achieved).

Liao is not directed towards transferring data from the edge layer to the platform layer; however, transferring trimmed data to any destination achieves the goals of reducing memory on the destination device and reducing necessary bandwidth. It would have been obvious at the effective filing date of the invention to a person having ordinary skill in the art to which said subject matter pertains to have modified Brenner in view of Jones with the teachings of Liao because it reduces the amount of data that has to be transferred to the destination computing devices and reduces the amount of memory/storage required at the destination devices (Liao, Col. 4 Li. 5-27).

With respect to claim 22, Brenner in view of Jones and Liao teaches the method of claim 1 wherein optimizing network memory capacity comprises reducing storage demand on available hardware storage capacity by replacing the first dataset with the trimmed dataset (Liao, Col. 24 Li. 4-22, A log record generator 1430 may obtain the output, comprising for example a modified version of the original filtering criteria/query 1410, or a modified version of the transformed representation 1420, generated by the obfuscation subsystem 1425 in the depicted embodiment. Corresponding to individual ones of the filtered access requests, a respective log record (LR) 1491 (e.g., 1491A, 1491B, 1491C, 1491D) may be stored in a log 1435 by the log record generator 1430.
As shown, in various embodiments, at least some of the log records (such as 1491A, 1491B, and 1491D) may comprise obfuscated portions 1492, corresponding to data that was substituted or redacted. Not all the log records may necessarily comprise obfuscated/substituted portions in some embodiments--e.g., some filtering requirements may not contain any sensitive data, so no substitutions or redactions may be required. In some embodiments, instead of generating substitutes, at least some tokens may be removed entirely from the versions of the filtering requirements that are stored in the log 1435.).

With respect to claim 23, Brenner in view of Jones and Liao teaches the method of claim 1 wherein optimizing network memory capacity comprises reducing storage demand on available software storage capacity by replacing the first dataset with the trimmed dataset (Liao, Col. 24 Li. 4-22, A log record generator 1430 may obtain the output, comprising for example a modified version of the original filtering criteria/query 1410, or a modified version of the transformed representation 1420, generated by the obfuscation subsystem 1425 in the depicted embodiment. Corresponding to individual ones of the filtered access requests, a respective log record (LR) 1491 (e.g., 1491A, 1491B, 1491C, 1491D) may be stored in a log 1435 by the log record generator 1430. As shown, in various embodiments, at least some of the log records (such as 1491A, 1491B, and 1491D) may comprise obfuscated portions 1492, corresponding to data that was substituted or redacted. Not all the log records may necessarily comprise obfuscated/substituted portions in some embodiments--e.g., some filtering requirements may not contain any sensitive data, so no substitutions or redactions may be required. In some embodiments, instead of generating substitutes, at least some tokens may be removed entirely from the versions of the filtering requirements that are stored in the log 1435.).
With respect to claim 24, Brenner in view of Jones and Liao teaches the method of claim 1 wherein optimizing network memory capacity comprises reducing storage demand on available hardware storage capacity and software storage capacity by replacing the first dataset with the trimmed dataset (Liao, Col. 24 Li. 4-22, A log record generator 1430 may obtain the output, comprising for example a modified version of the original filtering criteria/query 1410, or a modified version of the transformed representation 1420, generated by the obfuscation subsystem 1425 in the depicted embodiment. Corresponding to individual ones of the filtered access requests, a respective log record (LR) 1491 (e.g., 1491A, 1491B, 1491C, 1491D) may be stored in a log 1435 by the log record generator 1430. As shown, in various embodiments, at least some of the log records (such as 1491A, 1491B, and 1491D) may comprise obfuscated portions 1492, corresponding to data that was substituted or redacted. Not all the log records may necessarily comprise obfuscated/substituted portions in some embodiments--e.g., some filtering requirements may not contain any sensitive data, so no substitutions or redactions may be required. In some embodiments, instead of generating substitutes, at least some tokens may be removed entirely from the versions of the filtering requirements that are stored in the log 1435.).

With respect to claim 25, Brenner in view of Jones and Liao teaches the method of claim 1 wherein optimizing network memory capacity comprises optimizing network memory capacity at the edge layer (Liao, Col. 9 Li. 27-35, the sizes of individual UDIs 122 may be quite large--e.g., items that are petabytes in size may be supported by the OSS 102.
Only a small subset of the contents of a given UDI may be needed for a particular client application; the client may therefore use filtered accesses 178 to reduce the amount of data that has to be transferred to the destination computing devices at which the client application is to be run, and to reduce the amount of memory/storage required at the destination devices. & Col. 30 Li. 28-35, If a client is able to specify interpretation rules for the unstructured data items, and succinctly indicate filtering criteria (e.g., using languages similar to SQL, regular expressions or the like) that can be tested to identify subsets of the stored data that are to be retrieved, considerable savings in network bandwidth, memory and processing at client-side computing devices may be achieved).

With respect to claim 26, Brenner in view of Jones and Liao teaches the method of claim 1 wherein optimizing network memory capacity comprises optimizing network memory capacity at the platform layer (Brenner, Fig. 6 & pa 0041-0042, storage rules for storing or not storing data at the cloud).

With respect to claim 27, Brenner in view of Jones and Liao teaches the method of claim 1 wherein optimizing network memory capacity comprises optimizing network memory capacity at the edge layer (Liao, Col. 9 Li. 27-35, the sizes of individual UDIs 122 may be quite large--e.g., items that are petabytes in size may be supported by the OSS 102. Only a small subset of the contents of a given UDI may be needed for a particular client application; the client may therefore use filtered accesses 178 to reduce the amount of data that has to be transferred to the destination computing devices at which the client application is to be run, and to reduce the amount of memory/storage required at the destination devices. & Col. 30 Li.
28-35, If a client is able to specify interpretation rules for the unstructured data items, and succinctly indicate filtering criteria (e.g., using languages similar to SQL, regular expressions or the like) that can be tested to identify subsets of the stored data that are to be retrieved, considerable savings in network bandwidth, memory and processing at client-side computing devices may be achieved) and at the platform layer (Brenner, Fig. 6 & pa 0041-0042, storage rules for storing or not storing data at the cloud).

Response to Arguments

35 U.S.C. 103

Applicant argues that Brenner fails to teach subtracting personal data fields from a dataset, as well as not storing the personal data set at the edge layer, because the specific example shown in Fig. 6 of Brenner stores personal identification information locally and "forever." While the specific example described in Fig. 6 does not exactly align with the claim language, the "personal data fields" language is nonfunctional descriptive material. Additionally, Brenner shows that rules can specify that data be deleted and be cloud tiered (Brenner, pa 0039). This provides data that is not saved in the edge layer. Embodiments of the data labeling process 120 allow system administrators or users to define, on a per data label basis, how that data should be protected, in conjunction with the traditional policy definitions and backup workflows (Brenner, pa 0024). This allows for rules to be made that delete data that the commercial entity does not require to be stored. Throughout Brenner it is clear that the data labels and the rules associated with the labels are user-defined.
The references do not specify "said deprioritized data points comprise data points that comprise personal data fields that said commercial entity does not require to be stored and that are not saved in the edge layer"; however, it would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains to have modified Brenner in view of Jones and Liao to have used the specific data storage policy described in the claims, because the method has the aforementioned benefits and the benefits would apply regardless of the type of data.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRITTANY N ALLEN whose telephone number is (571) 270-3566. The examiner can normally be reached M-F, 9 am - 5:00 pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sherief Badawi, can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRITTANY N ALLEN/
Primary Examiner, Art Unit 2169
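For readers without the full record, the claim 1 flow that the rejection maps onto Brenner, Jones, and Liao (an edge-layer front-end filter applies rules to assign first/second metadata tags, deprioritized points are subtracted, and only the trimmed dataset replaces the original and moves to the platform layer) can be sketched roughly as follows. All names and the sample rule are hypothetical illustrations of the claimed flow, not code from the application or the cited references:

```python
from dataclasses import dataclass

PRIORITIZED, DEPRIORITIZED = "tag-1", "tag-2"  # first / second metadata tag

@dataclass
class DataPoint:
    fields: dict
    tag: str = ""

def front_end_filter(dataset, rules):
    """Edge-layer filter: apply (e.g. ML-derived) rules to tag each point."""
    for point in dataset:
        keep = all(rule(point.fields) for rule in rules)
        point.tag = PRIORITIZED if keep else DEPRIORITIZED
    return dataset

def trim(dataset):
    """Subtract second-tag points; the trimmed set holds only first-tag points."""
    return [p for p in dataset if p.tag == PRIORITIZED]

# Hypothetical rule: deprioritize records carrying personal data fields.
rules = [lambda fields: "ssn" not in fields]
first_dataset = [DataPoint({"txn": 101}), DataPoint({"txn": 102, "ssn": "redacted"})]
trimmed = trim(front_end_filter(first_dataset, rules))
# The edge layer would then replace first_dataset with `trimmed` and transfer
# only `trimmed` to the platform layer, reducing memory and bandwidth.
print(len(first_dataset), len(trimmed))  # prints: 2 1
```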

Prosecution Timeline

May 11, 2023: Application Filed
Jun 11, 2024: Non-Final Rejection — §103
Sep 23, 2024: Response Filed
Nov 15, 2024: Final Rejection — §103
Feb 18, 2025: Request for Continued Examination
Feb 25, 2025: Response after Non-Final Action
Mar 07, 2025: Non-Final Rejection — §103
Mar 18, 2025: Interview Requested
Mar 31, 2025: Examiner Interview Summary
Mar 31, 2025: Applicant Interview (Telephonic)
Apr 24, 2025: Response Filed
Jun 16, 2025: Final Rejection — §103
Sep 11, 2025: Interview Requested
Sep 18, 2025: Examiner Interview Summary
Sep 18, 2025: Applicant Interview (Telephonic)
Oct 17, 2025: Request for Continued Examination
Oct 22, 2025: Response after Non-Final Action
Nov 14, 2025: Non-Final Rejection — §103
Jan 21, 2026: Interview Requested
Jan 29, 2026: Examiner Interview Summary
Jan 29, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585707: SYSTEMS AND METHODS FOR DOCUMENT ANALYSIS TO PRODUCE, CONSUME AND ANALYZE CONTENT-BY-EXAMPLE LOGS FOR DOCUMENTS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12561342: MULTI-REGION DATABASE SYSTEMS AND METHODS
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12530391: Digital Duplicate
Granted Jan 20, 2026 (2y 5m to grant)

Patent 12524389: ENTERPRISE ENGINEERING AND CONFIGURATION FRAMEWORK FOR ADVANCED PROCESS CONTROL AND MONITORING SYSTEMS
Granted Jan 13, 2026 (2y 5m to grant)

Patent 12524475: CONCEPTUAL CALCULATOR SYSTEM AND METHOD
Granted Jan 13, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 42% (79% with interview, +37.7%)
Median Time to Grant: 4y 8m
PTA Risk: High
Based on 391 resolved cases by this examiner. Grant probability derived from career allow rate.
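The 79% with-interview figure appears to be the career allow rate plus the interview lift treated as additive percentage points (an assumption; the dashboard does not state its formula):

```python
# Assumed reconstruction of the "With Interview" projection.
base_rate, interview_lift = 41.7, 37.7   # percentage points (163/391 = 41.7%)
print(round(base_rate + interview_lift))  # 79, matching "With Interview: 79%"
```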
