Prosecution Insights
Last updated: April 19, 2026
Application No. 17/854,352

AI-AUGMENTED AUDITING PLATFORM INCLUDING TECHNIQUES FOR PROVIDING AI-EXPLAINABILITY FOR PROCESSING DATA THROUGH MULTIPLE LAYERS

Current Status: Non-Final OA (§103)
Filed: Jun 30, 2022
Examiner: MOUNDI, ISHAN NMN
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: PricewaterhouseCoopers LLP
OA Round: 3 (Non-Final)

Grant Probability: 12% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 6m
Grant Probability With Interview: 46%

Examiner Intelligence

Career Allow Rate: 12% (2 granted / 16 resolved; -42.5% vs TC avg)
Interview Lift: +33.3% (a strong lift among resolved cases with interview)
Avg Prosecution: 4y 6m (typical timeline)
Total Applications: 57 across all art units (41 currently pending)

Statute-Specific Performance

§101: 37.7% (-2.3% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 9.7% (-30.3% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 16 resolved cases.
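As a sanity check on the figures above, the Tech Center baseline implied by each row can be backed out from the examiner's rate and its delta. This is inferred arithmetic on this page's numbers only, assuming the delta is simply examiner rate minus TC average; the tool's actual baseline methodology is not published here.

```python
# Back out the implied Tech Center average for each statute from the
# examiner's rate and the displayed "vs TC avg" delta.
# Assumption (not stated by the tool): delta = examiner_rate - tc_average.
stats = {
    "101": (37.7, -2.3),
    "103": (45.0, +5.0),
    "102": (9.7, -30.3),
    "112": (7.2, -32.8),
}
implied_tc_avg = {s: rate - delta for s, (rate, delta) in stats.items()}
```

Notably, all four rows imply the same baseline of about 40.0%, which is consistent with a single Tech Center average behind every statute comparison.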

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/12/2025 has been entered.

Claims 1, 12, and 13 have been amended. Claims 1-13 remain pending in the application.

The amendment filed 11/12/2025 is sufficient to overcome the 101 rejections of claims 1-13. The previous rejections have been withdrawn.

The amendment filed 11/12/2025 is sufficient to overcome the 103 rejections of claims 1, 2, 8, and 10-13 over Olsher in view of Contryman, the 103 rejection of claim 3 over Olsher in view of Contryman and further in view of Li, and the 103 rejections of claims 4-7 and 9 over Olsher in view of Contryman and further in view of Lin. The previous rejections have been withdrawn.

Argument 1, regarding the 101 rejections: applicant argues that the claims integrate the judicial exceptions into a practical application by improving the operation of a computing system by automatically updating the system based on user engagement. Applicant argues that this improvement is reflected in the amended independent claims. Examiner agrees, and the 101 rejections are withdrawn.

Argument 2, regarding the prior art rejections: applicant argues that Olsher in view of Contryman does not appear to explicitly teach "updating one or more logic rules that connect the input layer to the presentation layer". Examiner notes this argument is moot in view of the rejections over Olsher in view of Contryman and Kishimoto et al (Pub. No.: US 11443212 B2), hereafter Kishimoto. Kishimoto teaches including automatically updating one or more logic rules that connect the input layer to the presentation layer (policy rules are derived from a Markov logic network, C12:L60-65; these policy rules dictate how input data is explained to a user via a user interface and may be iteratively updated using feedback collected by the user, claim 1; under the broadest reasonable interpretation and in view of P0072 of the specification of the instant application, the user interface is interpreted as a presentation layer). The full prior art rejections are outlined below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1, 2, 8, and 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Olsher et al (Pub. No.: WO 2016183229 A1), hereafter Olsher, in view of Contryman et al (Pub. No.: US 12056771 B1), hereafter Contryman, and Kishimoto et al (Pub. No.: US 11443212 B2), hereafter Kishimoto.

Regarding claims 1, 12, and 13, Olsher teaches a system, method, and medium for providing explainability for processing data through multiple data-processing layers, which comprise, at an input layer: receive an evidence data set comprising a plurality of evidence items ("The system including an input interface communicatively coupled to an input system for receiving input knowledge data, a task, and user input, and an output interface communicatively coupled to an output system for generating the controlled action output.", P0061); apply one or more evidence processing models to the evidence data set to generate evidence understanding data ("a process for converting knowledge models and inputs into output (the reasoning procedure);", P0129); … at a presentation layer: receive data, wherein the received data includes one of: the evidence understanding data, and data generated based on the evidence understanding data ("The system including an input interface communicatively coupled to an input system for receiving input knowledge data, a task, and user input, and an output interface communicatively coupled to an output system for generating the controlled action output.", P0061); and apply one or more presentation generation models to the received data to generate presentation data (system converts analysis data into "story form" for presentation to the human audience, P0459).

Olsher does not appear to explicitly teach … and generate input-layer explainability data, wherein the input-layer explainability data represents information about the processing of the evidence data set by the input layer; … and generate presentation-layer explainability data, wherein the presentation-layer explainability data represents information about the processing of the received data by the input layer; cause display of the presentation data; and cause display of one or more of: the input-layer explainability data and the presentation-layer explainability data.

Contryman teaches … and generate input-layer explainability data, wherein the input-layer explainability data represents information about the processing of the evidence data set by the input layer ("Decision output builder 236 may then generate a decision result… decision includes the explainability data discussed herein (e.g.,…, such as a decision explainability graph, indicating how a decision was reached for a given input…)", C13:L24-38); … and generate presentation-layer explainability data, wherein the presentation-layer explainability data represents information about the processing of the received data by the input layer; cause display of the presentation data (decision output builder may generate a decision based on user input data, C13:L24-38); cause display of one or more of: the input-layer explainability data and the presentation-layer explainability data (decision is transmitted to user interface to be presented to the user, C7:L27-31); monitor user interaction with the displayed input-layer explainability data and/or the displayed presentation-layer explainability data to detect a portion of the displayed input-layer explainability data and/or the displayed presentation-layer explainability data that the user selects via an interactive interface (user may provide a response after being presented with data on a user interface, and this response is stored in memory, C11:L62-67; the data presented on the user interface is explainability data that is generated by the ML digitization engine 222, C10:L54-58, C11:L8-15, C11:L52-62); and automatically update a configuration of at least one of the input layer and the presentation layer based on the user interaction ("the message and/or user interface enables a user to provide a response that updates, changes, and/or confirms the key, value, or key, value pairing. In embodiments, the updates, changes, and/or confirmations are stored in memory 260 along with the document image 204 as training data for future retraining of the ML digitization engine 222", C11:L62-67).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Olsher and Contryman before them, to include Contryman's specific teachings of generating explainability data and decision data based on user input, transmitting this data to a user interface, tracking user interactions, and updating ML models based on the interactions in Olsher's system of Universal Task Independent Simulation and Control for Generating Controlled Actions Using Nuanced Artificial Intelligence. One would have been motivated to make such a combination of generating explainability data and decision data based on user input and transmitting this data to a user interface (see Contryman C13:L24-38, C7:L27-31) and receiving input knowledge data and an output interface to process decisions (see Olsher P0061).

Olsher in view of Contryman does not appear to explicitly teach "including automatically updating one or more logic rules that connect the input layer to the presentation layer". Kishimoto teaches including automatically updating one or more logic rules that connect the input layer to the presentation layer (policy rules are derived from a Markov logic network, C12:L60-65; these policy rules dictate how input data is explained to a user via a user interface and may be iteratively updated using feedback collected by the user, claim 1; under the broadest reasonable interpretation and in view of P0072 of the specification of the instant application, the user interface is interpreted as a presentation layer).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Olsher, Contryman, and Kishimoto before them, to include Kishimoto's specific teachings of updating policy rules that dictate how input data is explained to a user via a user interface in Olsher's system of Universal Task Independent Simulation and Control for Generating Controlled Actions Using Nuanced Artificial Intelligence. One would have been motivated to make such a combination of updating policy rules that dictate how input data is explained to a user via a user interface (see Kishimoto claim 1) and receiving input knowledge data and an output interface to process decisions (see Olsher P0061).

Regarding claim 2, Olsher in view of Contryman and Kishimoto teaches the limitations of claim 1 as outlined above. Olsher further teaches: receive the evidence understanding data generated by the input layer ("system including an input interface communicatively coupled to an input system for receiving input knowledge data, a task, and user input, and an output interface communicatively coupled to an output system for generating the controlled action output.", P0061); apply one or more intermediate-layer processing models to the evidence understanding data to generate the data received by the presentation layer; and provide the data received by the presentation layer to the presentation layer ("a process for converting knowledge models and inputs into output (the reasoning procedure); a post-processing step involving intermediate or said final results", P0129). Contryman further teaches: and generate intermediate-layer explainability data, wherein the intermediate-layer explainability data represents information about the processing of the evidence understanding data by the one or more intermediate layers (intermediate level decision and corresponding explainability data is generated based on the decision model, C22:L5-20).

Regarding claim 8, Olsher in view of Contryman and Kishimoto teaches the limitations of claim 1 as outlined above. Olsher further teaches wherein the one or more processors are configured to initialize the presentation layer by applying one or more machine learning models to classify output data from one or more prior analyses performed by the system (concepts within output data are identified and, after performing calculations, are assigned to a different concept, P0119).

Regarding claim 10, Olsher in view of Contryman and Kishimoto teaches the limitations of claim 1 as outlined above. Contryman further teaches wherein the one or more processors are configured to cause the system to: receive a user input comprising a selection of a portion of the displayed presentation data, wherein causing display of one or more of the input-layer explainability data and the presentation-layer explainability data is performed in accordance with the user input (explainability data, such as an explainability graph, is sent to a user, who then inputs a decision corresponding to the explainability data, which is then stored and displayed in a different graphical interface, C13:L39-52).

Regarding claim 11, Olsher in view of Contryman and Kishimoto teaches the limitations of claim 1 as outlined above. Contryman further teaches wherein the one or more processors are configured to cause the system to, at the presentation layer, select, based on the received data, the one or more presentation generation models from a superset of presentation generation models ("Based on the context, either form digitization engine 220 may select a specific set of ML models for use by the ML digitization engine 220 to extract key, value pairs form the document image 204", C11:L43-46).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Olsher in view of Contryman and Kishimoto, and further in view of Li et al (Pub. No.: CN 113518962 A), hereafter Li.

Regarding claim 3, Olsher in view of Contryman and Kishimoto teaches the limitations of claim 1 as outlined above. Olsher does not appear to explicitly teach wherein the input layer and the presentation layer are each configured to apply a respective ontology. Li teaches wherein the input layer and the presentation layer are each configured to apply a respective ontology (different layers are directed by their own corresponding ontology, page 7, paragraph 9). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Olsher, Contryman, Kishimoto, and Li before them, to include Li's specific teachings of directing different layers by their own corresponding ontology in Olsher's system of Universal Task Independent Simulation and Control for Generating Controlled Actions Using Nuanced Artificial Intelligence. One would have been motivated to make such a combination of directing different layers by their own corresponding ontology (see Li page 7, paragraph 9) and organizing networks by their "semantic atom" before placing them into a graph network (see Olsher P00243).

Claims 4-7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Olsher in view of Contryman and Kishimoto, and further in view of Lin et al (Pub. No.: US 20220222049 A1), hereafter Lin.

Regarding claim 4, Olsher in view of Contryman and Kishimoto teaches the limitations of claim 1 as outlined above. Olsher does not appear to explicitly teach wherein the one or more processors are configured to cause the system to: receive a user input comprising an instruction to modify the input layer; and modify the input layer in accordance with the user input without modifying the presentation layer. Lin teaches wherein the one or more processors are configured to cause the system to: receive a user input comprising an instruction to modify the input layer; and modify the input layer in accordance with the user input without modifying the presentation layer (user input is used to modify the input layer; the representation layer is not listed as being affected, P0028). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Olsher, Contryman, Kishimoto, and Lin before them, to include Lin's specific teachings of using user input to modify the input layer while not affecting the representation layer in Olsher's system of Universal Task Independent Simulation and Control for Generating Controlled Actions Using Nuanced Artificial Intelligence. One would have been motivated to make such a combination of using user input to modify the input layer while not affecting the representation layer (see Lin P0028) and allowing the user to adjust their input in the Deep MindMaps for analysis and creating outputs for specific tasks (see Olsher P0053-P0055, P00648).

Regarding claim 5, Olsher in view of Contryman and Kishimoto teaches the limitations of claim 1 as outlined above.
Olsher does not appear to explicitly teach wherein the one or more processors are configured to: receive a user input comprising an instruction to modify the presentation layer; and modify the presentation layer in accordance with the user input without modifying the input layer. Lin teaches wherein the one or more processors are configured to: receive a user input comprising an instruction to modify the presentation layer; and modify the presentation layer in accordance with the user input without modifying the input layer (user input is used to modify the visual representation layer; the input is not listed as being affected, P0019). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Olsher, Contryman, Kishimoto, and Lin before them, to include Lin's specific teachings of using user input to modify the representation layer while not affecting the input layer in Olsher's system of Universal Task Independent Simulation and Control for Generating Controlled Actions Using Nuanced Artificial Intelligence. One would have been motivated to make such a combination of using user input to modify the representation layer while not affecting the input layer (see Lin P0019) and using INTELNET for changing representation of data based on emergency/unexpected scenarios, not based on user input (see Olsher P00516).

Regarding claim 6, Olsher in view of Contryman and Kishimoto, and further in view of Lin, teaches the limitations of claim 5 as outlined above. Lin further teaches wherein modifying the presentation layer comprises modifying the one or more presentation generation models while maintaining an input data format for the one or more presentation generation models and maintaining an output data format for the one or more presentation generation models (visual model may be saved in any suitable form, P0021).

Regarding claim 7, Olsher in view of Contryman and Kishimoto, and further in view of Lin, teaches the limitations of claim 5 as outlined above. Lin further teaches wherein modifying the presentation layer comprises modifying one or more connections of the presentation layer to one or more other layers of the system (modification of the visual representation layer includes modifying connections to other layers, P0019).

Regarding claim 9, Olsher in view of Contryman and Kishimoto teaches the limitations of claim 1 as outlined above. Olsher does not appear to explicitly teach wherein the one or more processors are configured to cause the system to: receive utilization data representing a manner in which the presentation output is utilized by one or more users; and automatically modify the presentation layer in accordance with the utilization data. Lin teaches wherein the one or more processors are configured to cause the system to: receive utilization data representing a manner in which the presentation output is utilized by one or more users; and automatically modify the presentation layer in accordance with the utilization data (users may import their own data regarding the visual representation layer and modify the layer based on the uploaded data using model editor 204, P0030, P0032). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Olsher, Contryman, Kishimoto, and Lin before them, to include Lin's specific teachings of using user input to modify the representation layer in Olsher's system of Universal Task Independent Simulation and Control for Generating Controlled Actions Using Nuanced Artificial Intelligence. One would have been motivated to make such a combination of using user input to modify the representation layer (see Lin P0030, P0032) and allowing the user to adjust their input in the Deep MindMaps for analysis and creating outputs for specific tasks (see Olsher P0053-P0055, P00648).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISHAN MOUNDI, whose telephone number is (703) 756-1547. The examiner can normally be reached 8:30 A.M. - 5 P.M. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Ell, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/I.M./
Examiner, Art Unit 2141

/MATTHEW ELL/
Supervisory Patent Examiner, Art Unit 2141

Prosecution Timeline

Jun 30, 2022: Application Filed
Jan 29, 2025: Non-Final Rejection (§103)
May 21, 2025: Examiner Interview Summary
May 21, 2025: Applicant Interview (Telephonic)
Jun 04, 2025: Response Filed
Aug 08, 2025: Final Rejection (§103)
Nov 10, 2025: Applicant Interview (Telephonic)
Nov 10, 2025: Examiner Interview Summary
Nov 12, 2025: Request for Continued Examination
Nov 19, 2025: Response after Non-Final Action
Dec 10, 2025: Non-Final Rejection (§103)
Mar 11, 2026: Applicant Interview (Telephonic)
Mar 11, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561970
METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR IMAGE RECOGNITION
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the single most recent grant.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 12%
With Interview: 46% (+33.3% lift)
Median Time to Grant: 4y 6m
PTA Risk: High
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
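The projection figures line up with simple arithmetic on the examiner's career counts. A minimal sketch of one plausible derivation follows; the tool's actual methodology is not published, so the additive treatment of the interview lift is an assumption, not its formula.

```python
# Illustrative reconstruction of the dashboard's headline projections.
# Assumption (not stated by the tool): the interview lift is applied as
# additive percentage points on top of the career allow rate.

def career_allow_rate(granted: int, resolved: int) -> float:
    """Allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_rate_pct: float, lift_pp: float) -> float:
    """Grant probability after applying the interview lift."""
    return base_rate_pct + lift_pp

base = career_allow_rate(granted=2, resolved=16)   # 12.5, displayed as 12%
adjusted = with_interview(base, lift_pp=33.3)      # ~45.8, displayed as 46%
```

Under this reading, 2 of 16 resolved cases gives 12.5%, displayed as 12%, and adding the 33.3-point interview lift gives roughly 45.8%, displayed as 46%.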
