Prosecution Insights
Last updated: April 19, 2026
Application No. 18/244,971

HYBRID EXPLAINABLE ARTIFICIAL INTELLIGENCE SYSTEM

Non-Final OA — §101, §103
Filed: Sep 12, 2023
Examiner: KASSIM, HAFIZ A
Art Unit: 3623
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: BANK OF AMERICA CORPORATION
OA Round: 1 (Non-Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 44% of resolved cases (148 granted / 338 resolved; -8.2% vs TC avg)
Interview Lift: +53.7% for resolved cases with interview (strong)
Avg Prosecution: 2y 11m typical timeline; 29 applications currently pending
Total Applications: 367 across all art units (career history)
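The headline figures above reduce to simple ratios. A minimal sketch of the assumed arithmetic (the dashboard's actual methodology is not disclosed; the formula below is our reading of "148 granted / 338 resolved"):

```python
# Assumed derivation of the career allow rate shown above.
granted = 148
resolved = 338
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 43.8%, displayed rounded to 44%
```

This also explains why the card shows "44%" while the raw ratio is 43.8%: the headline number is rounded to the nearest whole percent.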

Statute-Specific Performance

§101: 40.9% (+0.9% vs TC avg)
§103: 32.6% (-7.4% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 338 resolved cases
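The "vs TC avg" deltas read as simple differences between the examiner's per-statute rate and the Tech Center average estimate. A sketch under that assumption (the flat 40.0% TC average is back-calculated from the displayed deltas, not taken from any published figure):

```python
# Per-statute examiner rates vs. an assumed Tech Center average of 40.0%
# (back-calculated from the deltas shown above).
tc_avg = 40.0
examiner_rates = {"§101": 40.9, "§103": 32.6, "§102": 7.8, "§112": 14.0}
for statute, rate in examiner_rates.items():
    delta = rate - tc_avg
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```

Reproducing all four displayed deltas from a single 40.0% baseline suggests the chart compares each statute against one overall TC estimate rather than statute-specific averages.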

Office Action

§101 §103
DETAILED ACTION

This is a non-final, first office action on the merits. Claims 1-21 are pending. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Specifically, claims 1-21 are directed to an abstract idea without additional elements amounting to significantly more than the abstract idea. With respect to Step 2A Prong One of the framework, claims 1, 5, 9, and 15 recite an abstract idea. Claims 1, 5, 9, and 15 include “inputting a data set comprising a single-layer predictor; inputting the data set comprising a multi-layer predictor; processing the data set into a feature set; processing the data set into raw data; creating from the raw data, a prediction set with multiple layers; mapping the prediction set against the feature set to identify to what extent each feature contributed to a final prediction from the prediction set; based on the mapping, generating an explanation of the predictive behavior”. The limitations above recite an abstract idea under Step 2A Prong One. More particularly, the elements above recite mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), because the elements describe a process for generating an explanation of the predictive behavior. As a result, claims 1, 5, 9, and 15 recite an abstract idea under Step 2A Prong One. Claims 2-4, 6-8, 10-14, and 16-21 further describe the process for generating an explanation of the predictive behavior. 
As a result, claims 2-4, 6-8, 10-14, and 16-21 recite an abstract idea under Step 2A Prong One for the same reasons as stated above with respect to claims 1, 5, 9, and 15. With respect to Step 2A Prong Two of the framework, claims 1, 5, 9, and 15 do not include additional elements that integrate the abstract idea into a practical application. Claims 1, 5, 9, and 15 include additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements of claims 1, 5, 9, and 15 include a processor, artificial-intelligence, a shallow learning system, and a deep learning system. When considered in view of the claim as a whole, the additional elements do not integrate the abstract idea into a practical application because the additional computing elements are generic computing elements that are merely used as a tool to perform the recited abstract idea. As a result, claims 1, 5, 9, and 15 do not include additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two. Claims 2, 6, 10, and 16 do not include any additional elements beyond those recited with respect to claims 1, 5, 9, and 15. As a result, claims 2, 6, 10, and 16 do not include additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two for the same reasons as stated above with respect to claims 1, 5, 9, and 15. Claims 3-4, 7-8, 11-14, and 17-21 include additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements of claims 3-4, 7-8, 11-14, and 17-21 include a machine learning model, a neural network, a shallow learning system, and a natural language . When considered in view of the claims as a whole, the additional elements do not integrate the abstract idea into a practical application because the additional computing elements do no more than generally link the use of the recited abstract idea to a particular technological environment. 
As a result, claims 3-4, 7-8, 11-14, and 17-21 do not include additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two. With respect to Step 2B of the framework, claims 1, 5, 9, and 15 do not include additional elements amounting to significantly more than the abstract idea. As noted above, claims 1, 5, 9, and 15 include additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements of claims 1, 5, 9, and 15 include a processor, artificial-intelligence, a shallow learning system, and a deep learning system. The additional elements do not amount to significantly more than the abstract idea because the additional computing elements are generic computing elements that are merely used as a tool to perform the recited abstract idea. Further, looking at the additional elements as an ordered combination adds nothing that is not already present when considering the additional elements individually. As a result, independent claims 1, 5, 9, and 15 do not include additional elements that amount to significantly more than the abstract idea under Step 2B. Claims 2, 6, 10, and 16 do not include any additional elements beyond those recited with respect to claims 1, 5, 9, and 15. As a result, claims 2, 6, 10, and 16 do not include additional elements that amount to significantly more than the abstract idea under Step 2B for the same reasons as stated above with respect to claims 1, 5, 9, and 15. Claims 3-4, 7-8, 11-14, and 17-21 include additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements of claims 3-4, 7-8, 11-14, and 17-21 include a machine learning model, a neural network, a shallow learning system, and a natural language . 
The additional elements do not amount to significantly more than the abstract idea because the additional computing elements do no more than generally link the use of the recited abstract idea to a particular technological environment. Further, looking at the additional elements as an ordered combination adds nothing that is not already present when considering the additional elements individually. As a result, claims 3-4, 7-8, 11-14, and 17-21 do not include additional elements that amount to significantly more than the abstract idea under Step 2B. Therefore, the claims are directed to an abstract idea without additional elements amounting to significantly more than the abstract idea. Accordingly, claims 1-21 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co. , 383 U.S. 
1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1, 3-5, 7-9, 11-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wei et al. (US Pat No. 12,079,719) (hereinafter Wei et al . ) in view of De et al. (US Pub No. 2021/0142169) (hereinafter De et al. ). Regarding claims 1 and 5, Wei discloses a method for generating explainable artificial-intelligence, the method comprising: inputting a data set into a shallow learning system, the shallow learning system comprising a single-layer predictor (see Wei, column 7, lines 26-35, wherein the first and second sigmoid outputs are combined in a wide and shallow neural network that provide a third sigmoid output as a final output of the machine learning model….obtained from one or more data sources for each of a plurality of patients as a dataset; column 4, lines 36-39, wherein each layer contains one or more nodes (alternatively referred to as "vertices") as representatively indicated by reference numeral 230. 
The vertices are connected by edges that represent weighting in the neural network); inputting the data set into a deep learning system, the deep learning system comprising a multi-layer predictor (see Wei, column 6, lines 20-24, wherein input features are concatenated in an embedding layer 420 which is utilized as an input to one or more multi-layer neural networks 425 that each utilize activation functions provided by a plurality of rectified linear units; and column 3, lines 53-56, wherein the model implements lifelong learning by integrating wide and deep learning components with traditional tree models to leverage results from prior model learnings into future predictions); processing, at the shallow learning system, the data set into a feature set (see Wei, column 7, lines 26-35, wherein the first and second sigmoid outputs are combined in a wide and shallow neural network that provide a third sigmoid output as a final output of the machine learning model….obtained from one or more data sources for each of a plurality of patients as a dataset; and column 7, lines 60-64, wherein utilize current non-sequential data from the healthcare dataset as non-sequential input features and provide predictions of identified patient profiles using a first multi-level neural network); processing, at the deep learning system, the data set into raw data (see Wei, column 3, line 65 & column 4, line 1, wherein receive real-world healthcare data 110 for patients from one or more data sources 120; and column 3, lines 53-56, wherein the model implements lifelong learning by integrating wide and deep learning components with traditional tree models to leverage results from prior model learnings into future predictions); creating, at the deep learning system, from the raw data, a prediction set with multiple layers (see Wei, column 3, line 65 & column 4, line 1, wherein receive real-world healthcare data 110 for patients from one or more data sources 120; column 3, lines 53-56, 
wherein the model implements lifelong learning by integrating wide and deep learning components with traditional tree models to leverage results from prior model learnings into future predictions; column 8, lines 1-2, wherein provide predictions of identified patient profiles using a second multi-level neural network); generating an explanation of the predictive behavior of the deep learning system (see Wei, column 4, lines 30-32, wherein establish terminology that is used in the detailed explanation of the inventive Bayesian deep neural network-based LML model 105; and column 1, lines 39-42, wherein Lifelong learning is implemented by dynamically integrating present learning from the wide and deep learning components with past learning from traditional tree models in the prior component into future predictions). Wei et al. fails to explicitly disclose mapping the prediction set against the feature set to identify to what extent each feature contributed to a final prediction from the prediction set; based on the mapping, generating an explanation of the predictive behavior of the deep learning system Analogous art De discloses mapping the prediction set against the feature set to identify to what extent each feature contributed to a final prediction from the prediction set (see De, para [0046], wherein the data cluster assembler 130 may implement the artificial intelligence component 212 to map the actual outcome to each of the case instances 206 to generate predictions; para [0029], wherein each of the plurality of instance clusters may be comprising the case instances that may be grouped together based on similarity features, which are represented by the hidden neuron contribution scores; para [0025], wherein the case instances may refer to various variable features associated with the case result prediction requirement); Analogous art De discloses based on the mapping, generating an explanation of the predictive behavior of the deep learning system (see De, para 
[0015], wherein explanation considers the structure of the model to derive an explanation for the prediction. The interpretability based approaches may aim to find trends and patterns to map inputs to the network predicted output; para [0081], wherein defaulting behavior and the behavior may have been deteriorating overtime; and para [0068], wherein decision rules from the decision tree 222 and may provide human interpretable explanation for deep learning prediction). Wei directed to a system for identifying patient profiles by continuously learning, as new data become available. De directed to improving prediction interpretation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Wei, regarding the System for Lifelong Machine Learning (LML) Model for Patient, to have included mapping the prediction set against the feature set to identify to what extent each feature contributed to a final prediction from the prediction set; based on the mapping, generating an explanation of the predictive behavior of the deep learning system because both inventions teach improving explanations for the model outcome. Further, the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claims 3, 7, 12, and 18, Wei discloses the method of claim 1 wherein the single-layer predictor is a machine learning model (see Wei, column 3, lines 50-53, wherein machine learning (LML) model is based on a Bayesian influencing concept that provides for updating model parameters dynamically through a sequence of model application and learning). 
Regarding claims 4, 8, 14, and 20, Wei discloses the method of claim 1 wherein the multi-layer predictor is a neural network (see Wei, column 2, lines 6-8, wherein a shallow and wide neural network which provides a final sigmoid output from the Bayesian deep neural network based LML model). Regarding claims 9 and 15, Wei discloses a system for generating explainable artificial-intelligence, the system comprising: a shallow learning system operable to: receive a data set (see Wei, column 7, lines 26-35, wherein the first and second sigmoid outputs are combined in a wide and shallow neural network that provide a third sigmoid output as a final output of the machine learning model….obtained from one or more data sources for each of a plurality of patients as a dataset; and column 4, lines 36-39, wherein each layer contains one or more nodes (alternatively referred to as "vertices") as representatively indicated by reference numeral 230. The vertices are connected by edges that represent weighting in the neural network); and process the data set into a feature set (see Wei, column 7, lines 26-35, wherein the first and second sigmoid outputs are combined in a wide and shallow neural network that provide a third sigmoid output as a final output of the machine learning model….obtained from one or more data sources for each of a plurality of patients as a dataset; and column 7, lines 60-64, wherein utilize current non-sequential data from the healthcare dataset as non-sequential input features and provide predictions of identified patient profiles using a first multi-level neural network); a deep learning system operable to: receive the data set (see Wei, column 6, lines 20-24, wherein input features are concatenated in an embedding layer 420 which is utilized as an input to one or more multi-layer neural networks 425 that each utilize activation functions provided by a plurality of rectified linear units; and column 3, lines 53-56, wherein the model implements lifelong 
learning by integrating wide and deep learning components with traditional tree models to leverage results from prior model learnings into future predictions); process the data set into raw data (see Wei, column 3, line 65 & column 4, line 1, wherein receive real-world healthcare data 110 for patients from one or more data sources 120; and column 3, lines 53-56, wherein the model implements lifelong learning by integrating wide and deep learning components with traditional tree models to leverage results from prior model learnings into future predictions); and create, from the raw data, a prediction set with multiple layers (see Wei, column 3, line 65 & column 4, line 1, wherein receive real-world healthcare data 110 for patients from one or more data sources 120; column 3, lines 53-56, wherein the model implements lifelong learning by integrating wide and deep learning components with traditional tree models to leverage results from prior model learnings into future predictions; column 8, lines 1-2, wherein provide predictions of identified patient profiles using a second multi-level neural network); a processor (see Wei, column 8, line 18, wherein a processor 805, a system memory 811) operable to: map the prediction set against the feature set; and based on the map: identify to what extent each feature, included in the feature set, contributed to a final prediction from the prediction set, as set forth above with claim 1; and generate an explanation of the predictive behavior of the deep learning system, as set forth above with claim 1. 
Regarding claims 11 and 17, Wei discloses the system of claim 9 wherein the shallow learning system comprises a single-layer predictor (see Wei, column 7, lines 26-35, wherein the first and second sigmoid outputs are combined in a wide and shallow neural network that provide a third sigmoid output as a final output of the machine learning model….obtained from one or more data sources for each of a plurality of patients as a dataset; and column 4, lines 36-39, wherein each layer contains one or more nodes (alternatively referred to as "vertices") as representatively indicated by reference numeral 230. The vertices are connected by edges that represent weighting in the neural network). Regarding claims 13 and 19, Wei discloses the system of claim 9 wherein the deep learning system comprises a multi-layer predictor (see Wei, column 6, lines 20-24, wherein input features are concatenated in an embedding layer 420 which is utilized as an input to one or more multi-layer neural networks 425 that each utilize activation functions provided by a plurality of rectified linear units). Claims 2, 6, 10, 16, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Wei et al. (US Pat No. 12,079,719) (hereinafter Wei et al . ), in view of De et al. (US Pub No. 2021/0142169) (hereinafter De et al. ), and further in view of Garvey et al. (US Pub No. 2024/0354789) (hereinafter Garvey et al. ). Regarding claims 2 and 6, Wei discloses the method of claim 1 wherein the mapping, as set forth above with claim 1. Wei et al. fails to explicitly disclose utilizes a heatmap. Analogous art Garvey discloses a heatmap (see Garvey, para [0064], wherein generating heatmap). Wei directed to a system for identifying patient profiles by continuously learning, as new data become available. Garvey directed to quantitative split driven quote. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Wei, regarding the System for Lifelong Machine Learning (LML) Model for Patient, to have included a heatmap because both inventions teach improving the overall experience. Further, the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claims 10 and 16, Wei discloses the system of claim 9 wherein the processor, as set forth with claim 9. Wei et al. fails to explicitly disclose a heatmap of the feature set and a heatmap of the prediction set to map the prediction set against the feature set. Analogous art Garvey discloses a heatmap of the feature set and a heatmap of the prediction set to map the prediction set against the feature set (see Garvey, para [0064], wherein generating heatmap; para [0146], wherein Machine learning: identify patterns and features that are indicative of sentiment. The features may include the tokens (words/phrases) in the text, the sequence/order of the words, adjacency of one word to another, the part of speech of a token, and/or other semantic/linguistic attributes of the text; and para [0042], wherein UX tests may capture heatmap elements, which may include qualitative and/or quantitative test result data that is tied to a particular location within a webpage or application page…..Heatmap data may be captured by the user selecting a particular area within a page and inputting qualitative and/or quantitative data that is associated with the selected page region. The user may input one or more quotations that describes a positive attribute and/or a negative attribute of a particular user interface element on the page). 
One of ordinary skill in the art would have recognized that applying the known technique of Garvey would have yielded predictable results and resulted in an improved system for the same reasons as stated above with respect to claim 2. Regarding claim 21, Wei discloses the system of claim 15 wherein the explanation, as set forth above with claim 15. Wei et al. fails to explicitly disclose formatted in natural language. Analogous art Garvey discloses formatted in natural language (see Garvey, para [0068], wherein quotations in an unstructured natural language format). One of ordinary skill in the art would have recognized that applying the known technique of Garvey would have yielded predictable results and resulted in an improved system for the same reasons as stated above with respect to claim 2.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US Pub No. 2008/0046562; US Pub No. 2019/0379589; US Pub No. 2023/0130567; US Pub No. 2017/0323239; US Pub No. 2021/0406693; US Pub No. 2020/0104704; S De Cnudde, Y Ramon, D Martens, F Provost, "Deep learning on big, sparse, behavioral data," Big Data, 2019, journals.sagepub.com; and V Swamy, B Radmehr, N Krco, M Marras, "Evaluating the explainers: black-box explainable machine learning for student success prediction in MOOCs," arXiv preprint arXiv …, 2022, arxiv.org.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAFIZ A KASSIM, whose telephone number is (571) 272-8534. The examiner can normally be reached 9:00 - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao Wu, can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAFIZ A KASSIM/
Primary Examiner, Art Unit 3623
03/26/2026

Prosecution Timeline

Sep 12, 2023
Application Filed
Mar 25, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602638
RISK MANAGEMENT SYSTEM AND RISK MANAGEMENT METHOD
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12586008
MANAGING HOTEL GUEST HOUSEKEEPING WITHIN AN AUTOMATED GUEST SATISFACTION AND SERVICES SCHEDULING SYSTEM
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12561706
SYSTEMS AND METHODS FOR MANAGING VEHICLE OPERATOR PROFILES BASED ON RELATIVE TELEMATICS INFERENCES VIA A TELEMATICS MARKETPLACE
Granted Feb 24, 2026 • 2y 5m to grant
Patent 12548038
Realtime Busyness for Places
Granted Feb 10, 2026 • 2y 5m to grant
Patent 12541724
SYSTEMS AND METHODS FOR TIME-SERIES FORECASTING
Granted Feb 03, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 44%
With Interview: 98% (+53.7%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 338 resolved cases by this examiner. Grant probability derived from career allow rate.
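The with-interview figure appears to combine the base grant probability with the interview lift as additive percentage points. A sketch under that assumption (additivity and the cap at 100% are our reading, not the vendor's stated formula):

```python
# Assumed: with-interview probability = base allow rate + interview lift
# (in percentage points), capped at 100%.
base = 0.44          # displayed grant probability
lift = 0.537         # +53.7 pct-point interview lift reported above
with_interview = min(base + lift, 1.0)
print(f"With interview: {with_interview:.0%}")  # 98%
```

Note that an additive lift this large only stays below 100% because the base rate is moderate; for a high-allowance examiner the cap would bind, which is one reason such lifts are often modeled multiplicatively instead.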
