Prosecution Insights
Last updated: April 19, 2026
Application No. 18/095,297

COMBINING STRUCTURED AND SEMI-STRUCTURED DATA FOR EXPLAINABLE AI

Status: Final Rejection — §103
Filed: Jan 10, 2023
Examiner: MEIS, JON CHRISTOPHER
Art Unit: 2654
Tech Center: 2600 — Communications
Assignee: SAP SE
OA Round: 4 (Final)
Grant Probability: 46% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 46% — grants 46% of resolved cases (10 granted / 22 resolved; -16.5% vs TC avg)
Interview Lift: +59.0% — strong lift among resolved cases with interview
Typical Timeline: 3y 0m avg prosecution; 30 currently pending
Career History: 52 total applications across all art units
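The figures above can be sanity-checked against each other. A minimal sketch, assuming "Career Allow Rate" is simply granted divided by resolved (the dashboard displays 46%, which suggests some rounding or weighting beyond the raw 45.5%):

```python
# Quick sanity check of the examiner statistics shown above.
# Assumption: allow rate = granted / resolved; the dashboard may adjust this.
granted, resolved, pending = 10, 22, 30

allow_rate = granted / resolved
total_applications = resolved + pending  # should match "52 Total Applications"

print(f"raw allow rate: {allow_rate:.1%}")          # 45.5% (displayed as 46%)
print(f"total applications: {total_applications}")  # 52
```

The resolved-plus-pending count does reconcile exactly with the 52 total applications reported for this examiner.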

Statute-Specific Performance

§101: 24.9% (-15.1% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
TC averages are estimates • Based on career data from 22 resolved cases
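The per-statute deltas are internally consistent: backing the Tech Center average out of each row (examiner's rate minus the stated delta) yields the same baseline for all four statutes, which suggests the deltas are measured against a single overall TC average rather than per-statute averages. A quick check:

```python
import math

# Implied TC baseline = examiner's allow rate minus the stated delta vs TC avg.
rows = {
    "§101": (24.9, -15.1),
    "§103": (49.7, +9.7),
    "§102": (12.9, -27.1),
    "§112": (10.6, -29.4),
}
implied = {statute: rate - delta for statute, (rate, delta) in rows.items()}

# Every row implies the same ~40.0% baseline.
assert all(math.isclose(v, 40.0) for v in implied.values())
print(implied)
```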

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1, 5-8, 12-15, and 19-29 are pending. Claims 1, 8, and 15 are independent. This Application was published as US 20240232294. Apparent priority is 10 January 2023. Applicant's amendments and arguments are considered but are either unpersuasive or moot in view of the new grounds of rejection that, if presented, were necessitated by the amendments to the Claims. This action is Final.

Response to Arguments — 35 USC 103

Applicant's arguments (pg. 8-11) with respect to training a single unified model explainer have been fully considered but they are not persuasive. Examiner agrees with Applicant's arguments that Pai does not disclose training the prototype generator. However, Applicant states that "… Manchanda also does not teach training a model explainer…" (bottom of pg. 10). Examiner disagrees and asserts that Manchanda does clearly teach training a model explainer. For example: "[0060] As previously described, the automated XAI prediction process includes training a deep neural network (DNN) model to compute relevance or contribution of each feature of the first set of features while generating the XAI predictions and the explainable model (MXAI)…" Examiner argues that the combination of Pai and Manchanda does teach the discussed limitations. Therefore, the rejection is maintained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1, 5-8, 12-15, and 19-29 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pai et al. (US 20200279140 A1) in view of Manchanda et al. (US 20220405603 A1) and Tazzyman (“Neural Network models”). Regarding claim 1, Pai discloses: 1. A system comprising: a memory that stores instructions; (Fig. 11, MEMORY 1112 ) and one or more processors (Fig. 11, PROCESSOR(S) 1114 ) configured by the instructions to perform operations comprising: training a machine learning model (Fig. 1A “MODEL TRAINING SYSTEM 102”) using a first training dataset that comprises both tabular and text features; ("[0030]... In some examples, a data point corresponds to tabular data (e.g., data contained in a table) or other data that is convertible to vector(s) (e.g., integers) or sub-images of a larger image. 
In this way, data such as database strings or images, can be converted into vectors for representation within vector space.") transforming data of the first training dataset into a transformed dataset that comprises only tabular features by replacing the text features of the first training dataset with vectors; (“[0030]... In some examples, a data point corresponds to tabular data (e.g., data contained in a table) or other data that is convertible to vector(s) (e.g., integers) or sub-images of a larger image. In this way, data such as database strings or images, can be converted into vectors for representation within vector space.” ; "[0042]... In an example illustration, strings of words corresponding to a feature (e.g., sepal width) can be changed to a real number (e.g., 0 or 1) to be represented as a feature value in vector space." ) generating a second training set that labels the transformed dataset using outputs from the machine learning model generated from corresponding entries in the first training dataset; ("[0024]… To do so, particular data points of the MLM (e.g., inputs to the MLM and the resulting output classifications made by the MLM) are selected as prototypes for a prototype model, each may represent a particular class of output from the MLM…." – the labeled dataset can be considered a second dataset. ) training a single unified model explainer using the second training set, thereby enabling the model explainer to provide an explanation that includes interrelationships between the tabular features and the text features in a single explainer model; (The explainer consists of a Prototype Generator, Distance Component, Score Generator, and Report Generator (see Fig. 1A and [0045]). Fig. 1A shows that the output from the Trained Model is fed into the Explainer. “[0066] The training output data 206 may include outputs (e.g., classification or predictions) of the trained model 110. 
For example, when referencing output data of the trained model 110 herein, the training output data 206 may be included in this output data. Further, [0048] describes that the prototype generator generates a prototype model based on the training data for the machine learning model. Therefore, the explainer uses both the original training data and outputs of the trained model (which make up the “second training set”).) converting text features of a data instance to a numerical vector that comprises at least one hundred dimensions; ("[0030]... In this way, data such as database strings or images, can be converted into vectors for representation within vector space." A string would be understood by one of ordinary skill in the art to include text.) combining the numerical vector with numerical features of the data instance to generate combined data; ("[0030]... The set of input feature values may be represented using a feature vector. Each feature vector may capture multiple features and include corresponding values that may be used as inputs to the machine learning model and can be represented in vector space or feature space. ... In some examples, a data point corresponds to tabular data (e.g., data contained in a table) or other data that is convertible to vector(s) (e.g., integers) or sub-images of a larger image. In this way, data such as database strings or images, can be converted into vectors for representation within vector space." Pai teaches that multiple features can be used to create a feature vector, and that features can include tabular or text data.) providing the combined data as input to a model explainer; ("[0048] The prototype generator 112 generates prototypes, each of which may be a sub group, condensed, or compressed version of other data points used as inputs in a machine learning model " Prototype generator is part of the model explainer, and uses the data points as an input to generate prototypes.) 
receiving, from the model explainer, global model explanations and local model explanations for a machine learning model; and ("[0026]... A global explanation score may be computed by combining local explanation scores for multiple prototypes of the prototype model, such as by computing an average…" ) causing presentation in a user interface of at least one of the global model explanations and the local model explanations. ("[0045]... The report generator 128 may generate one or more reports and/or provide (e.g., over a computer network) a user interface to a client device, such that a user can view the one or more explanation scores and/or other information or metrics that give insight associated with feature importance or relevance. ..." )

Pai does not specifically teach that tabular and text data are combined to provide explanations that include interrelationships between tabular and text features, or use of feature vectors with at least a hundred dimensions. Pai also does not disclose that the explainer is a trained model. Manchanda discloses: combining the numerical vector with numerical features of the data instance to generate combined data, thereby enabling the model explainer to provide an explanation that includes interrelationships between the tabular features and the text features; (Fig. 6A discloses that input data contains Text Data and Numerical Data, which are combined into Features. See also Table 1 which shows examples of text and numerical data. See also: "[0032]…The system automatically creates features from structured and unstructured data…." [0043] also discusses processing the unstructured text to extract features – combining both text and tabular data into features would yield explanations that include text and tabular features. As a specific example, the system of Pai in view of Manchanda would rank the score of different features as shown in Pai fig. 4D-F.
This ranking of features can be understood as showing interrelationships between the features.) Manchanda additionally discloses a dataset that comprises both text and tabular features (“[0053] At 1002, the method 1000 includes receiving an input data associated with a domain. The input data may include input training and test data (ITRAIN, ITEST) in form of one or more of textual data, structured data, unstructured data, and semi structured data.”) Pai and Manchanda are considered analogous art to the claimed invention because they are directed to Explainable AI. Additionally, Pai and Manchanda both disclose specific use of the system for explaining the reason a loan was denied. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Pai to use features combining text and numerical data. Doing so would have been beneficial so that all available data could be input to the models. It would have also been obvious to one of ordinary skill in the art to further use a test dataset as taught by Manchanda. Doing so would have been beneficial to verify the system’s performance. Manchanda additionally discloses: training a single unified model explainer using the second training set, thereby enabling the model explainer to provide an explanation that includes interrelationships between the tabular features and the text features in a single explainer model; (“[0060] As previously described, the automated XAI prediction process includes training a deep neural network (DNN) model to compute relevance or contribution of each feature of the first set of features while generating the XAI predictions and the explainable model (MXAI)…” ; see also “[0038] The XAI process or the method followed at the XAI system includes a data ingestion process, prediction process, feature engineering process, XAI prediction process and XAI relevance process. 
The data ingestion process includes receiving input data to train, test and provide predictions through automated machine learning algorithms. As described previously, the input data may include structured data, unstructured data, semi-structured data, and combinations thereof...” ; see also “[0007] … To perform each iteration of the plurality of iterations, the one or more hardware processors are configured by the instructions to process, in a forward pass, the input data through calculations based on weights of edges of the DNN generated through random number generators to predict an output; compare the output predicted in the forward pass with an actual output obtained from the input data to obtain an error;…” Pai and Manchanda are considered analogous art to the claimed invention because they are directed to Explainable AI. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Pai to use a trained DNN as an explainer model as taught by Manchanda. Doing so would have been beneficial so that it could be iteratively trained until predetermined criteria is met (Manchanda, [0049]) and so that the system is not limited by type of data processed (Manchanda, [0005]). This combination falls under combining prior art elements according to known methods to yield predictable results or use of known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141, KSR, 550 U.S. at 418, 82 USPQ2d at 1396. Manchanda does not disclose feature vectors with at least a hundred dimensions. Tazzyman discloses: converting text features of a data instance to a numerical vector that comprises at least one hundred dimensions; (“The embeddings can be of any number of dimensions; Word2Vec guidance is vague on this and suggests between 100 and 1000. 
Typically more dimensions = greater quality encoding, but there will be some limit beyond which you'll get diminishing returns. We typically use 200 or 300." Pg. 1 para 4) Pai, Manchanda, and Tazzyman are considered analogous art to the claimed invention because they are directed to Neural Networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Pai in view of Manchanda to use feature vectors with at least a hundred dimensions as taught by Tazzyman. Doing so would have been beneficial for greater quality encoding.

Regarding claim 5, Pai discloses: 5. The system of claim 1, wherein the causing of presentation in the user interface of at least one of the global model explanations and the local model explanations comprises causing presentation in the user interface of both the global model explanations and the local model explanations. ("[0096]... For example, charts similar to FIGS. 4A-4F may have been generated for this particular user using the user interface. The chart may indicated that the uppermost or highest scored feature (e.g., both locally and globally) was credit score, followed by crime.")

Regarding claim 6, Pai discloses: 6. The system of claim 1, wherein the converting of the text features of the data instance to a numerical vector comprises providing the text features to a natural language processor (NLP). ("[0030]... In this way, data such as database strings or images, can be converted into vectors for representation within vector space.") Pai does not disclose the use of a Natural Language Processor. Manchanda discloses: 6. The system of claim 1, wherein the converting of the text features of the data instance to a numerical vector comprises providing the text features to a natural language processor (NLP). ("[0043] As previously mentioned, for processing the unstructured text data, there is a need to create features.
Herein, domain ontology is used to create features from the unstructured data. Domain ontology identifies the kind of classes that are present in a particular domain (associated with the problem) and the properties that are present. Based on the classes and the properties, the system may identify the categories of the features from the domain ontology.” See also [0044]. Using domain ontology to identify the domain and properties of the text would be considered Natural Language Processing by one of ordinary skill in the art.) See motivation statement for claim 1. Regarding claim 7, Pai discloses: 7. The system of claim 1, wherein the converting of the text features of the data instance to a numerical vector comprises: converting individual words of the text features to word vectors; and ("[0030]… In this way, data such as database strings or images, can be converted into vectors for representation within vector space."; "[0042]... In an example illustration, strings of words corresponding to a feature (e.g., sepal width) can be changed to a real number (e.g., 0 or 1) to be represented as a feature value in vector space." ) combining the word vectors for each text feature to generate the numerical vector for the text feature. (“ [0030]... The set of input feature values may be represented using a feature vector. Each feature vector may capture multiple features and include corresponding values that may be used as inputs to the machine learning model and can be represented in vector space or feature space. ...”) Claim 8 is a medium claim with limitations corresponding to the limitations of Claim 1 and is rejected under similar rationale. Claim 12 is a medium claim with limitations corresponding to the limitations of Claim 5 and is rejected under similar rationale. Claim 13 is a medium claim with limitations corresponding to the limitations of Claim 6 and is rejected under similar rationale. 
Claim 14 is a medium claim with limitations corresponding to the limitations of Claim 7 and is rejected under similar rationale. Claim 15 is a method claim with limitations corresponding to the limitations of Claim 1 and is rejected under similar rationale. Claim 19 is a method claim with limitations corresponding to the limitations of Claim 5 and is rejected under similar rationale. Claim 20 is a method claim with limitations corresponding to the limitations of Claim 6 and is rejected under similar rationale. Regarding claim 21, Pai discloses: 21. The system of claim 1, wherein the training of the machine learning model comprises training the machine learning model to generate outputs that are similar to labels provided in the first training dataset. “([0044] The trained model 110 may be generated by the training component 108 using the raw training data 104 and/or the training data 106. The trained model 110 may include one or more models, such as A/B models that are tested. Once it is determined that the trained model 110 has acceptable accuracy, the trained model 110 may be deployed (e.g., as the deployed model 124). The determination that a trained model 110 has acceptable accuracy or confidence may include a threshold accuracy, such as, for example and without limitation, 80%, 90%, 98%, etc. The threshold accuracy may be predefined by the model training system 102, or may be user defined.” See also: “[0030] A “data point” (e.g., a test point, test data point, or prototype) may refer to a set of input feature values that a machine learning model may use to determine a classification (a data point may further be associated with a label that corresponds to the classification…)” Regarding claim 22, Pai discloses: 22. The system of claim 1, wherein the training of the model explainer comprises training the model explainer to generate outputs that are similar to outputs provided by the machine learning model. 
(“[0032] A “prototype model” may refer to a behavioral model of prototypes that is configured to be used to interpret the behavior of one or more machine learning models. The prototypes may be data points, which may be generated by condensing a larger set of data points, such as training data of a machine learning model, while retaining critical properties of the original data…”) Additionally, Manchanda discloses: 22. The system of claim 1, wherein the training of the model explainer comprises training the model explainer to generate outputs that are similar to outputs provided by the machine learning model. (“[0007]…To perform each iteration of the plurality of iterations, the one or more hardware processors are configured by the instructions to process, in a forward pass, the input data through calculations based on weights of edges of the DNN generated through random number generators to predict an output; compare the output predicted in the forward pass with an actual output obtained from the input data to obtain an error; propagate back the error to update weights of edges in a backward pass, wherein propagating the error back facilitates in output prediction to be close to actual output by updating weights of edges in the DNN...”) See claim 1 for motivation statement. Regarding claim 23, Pai discloses: 23. The system of claim 1, wherein the model explainer generates results based on linear and non-linear functions of input variables without using hidden variables or feedback loops. “[0105] Per block 801, each training or other data point (e.g., within the training data stores of FIG. 
1A or the vectors space 300) of a class is mapped or covered to a prototype candidate based on summation-based integer constraints.… [0106] These properties may be incorporated into the formulation of a set cover optimization problem, which may first set summation-based integer constraints as follows via an integer program: minimize [set-cover integer-program objective; equation rendered as an image in the original record]" At least in part, the model explainer generates results for the data points based on the above equation, which includes linear and non-linear (exponential) variables. This is a simple equation rather than a machine learning model which uses hidden variables and feedback loops.)

Claim 24 is a medium claim with limitations corresponding to the limitations of Claim 21 and is rejected under similar rationale. Claim 25 is a medium claim with limitations corresponding to the limitations of Claim 22 and is rejected under similar rationale. Claim 26 is a medium claim with limitations corresponding to the limitations of Claim 23 and is rejected under similar rationale. Claim 27 is a method claim with limitations corresponding to the limitations of Claim 21 and is rejected under similar rationale. Claim 28 is a method claim with limitations corresponding to the limitations of Claim 22 and is rejected under similar rationale. Claim 29 is a method claim with limitations corresponding to the limitations of Claim 23 and is rejected under similar rationale.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JON C MEIS whose telephone number is (703)756-1566. The examiner can normally be reached Monday - Thursday, 8:30 am - 5:30 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hai Phan can be reached on 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JON CHRISTOPHER MEIS/
Examiner, Art Unit 2654

/HAI PHAN/
Supervisory Patent Examiner, Art Unit 2654
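Claim 1, as mapped above, describes a concrete data flow: embed each text feature into a numerical vector of at least one hundred dimensions, concatenate it with the instance's tabular (numeric) features, and pass the combined vector to the model explainer. The sketch below illustrates only that flow; the hashing "embedding", the names `embed_text` and `combine`, and the sample features are hypothetical stand-ins, not the application's or any cited reference's actual implementation (Tazzyman's guidance contemplates a trained Word2Vec model rather than hashing).

```python
import hashlib

EMBED_DIM = 100  # claim 1 requires "at least one hundred dimensions"

def embed_text(text: str, dim: int = EMBED_DIM) -> list[float]:
    """Toy stand-in for a real embedding model: hash each word into one
    of `dim` buckets and count occurrences (deterministic, self-contained)."""
    vec = [0.0] * dim
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    return vec

def combine(text_features: str, numeric_features: list[float]) -> list[float]:
    """Concatenate the text embedding with the tabular features, yielding
    the 'combined data' that would be provided as input to the explainer."""
    return embed_text(text_features) + numeric_features

# Hypothetical loan-style instance: free-text note plus three tabular features.
combined = combine("loan denied due to low credit score", [0.72, 35.0, 41000.0])
print(len(combined))  # 103 = 100 embedding dims + 3 tabular features
```

Because text and tabular features sit in one vector, a single explainer model scoring that vector can surface interrelationships across both feature types, which is the distinction the rejection attributes to the Pai/Manchanda combination.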

Prosecution Timeline

Jan 10, 2023 — Application Filed
Feb 27, 2025 — Non-Final Rejection — §103
Mar 24, 2025 — Interview Requested
Mar 31, 2025 — Applicant Interview (Telephonic)
Mar 31, 2025 — Examiner Interview Summary
Apr 09, 2025 — Response Filed
May 07, 2025 — Final Rejection — §103
May 30, 2025 — Interview Requested
Jun 09, 2025 — Applicant Interview (Telephonic)
Jun 09, 2025 — Examiner Interview Summary
Jun 24, 2025 — Request for Continued Examination
Jun 30, 2025 — Response after Non-Final Action
Jul 29, 2025 — Non-Final Rejection — §103
Aug 27, 2025 — Interview Requested
Sep 11, 2025 — Applicant Interview (Telephonic)
Sep 11, 2025 — Examiner Interview Summary
Sep 15, 2025 — Response Filed
Oct 09, 2025 — Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603087 — VOICE RECOGNITION USING ACCELEROMETERS FOR SENSING BONE CONDUCTION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12579975 — Detecting Unintended Memorization in Language-Model-Fused ASR Systems (granted Mar 17, 2026; 2y 5m to grant)
Patent 12482487 — MULTI-SCALE SPEAKER DIARIZATION FOR CONVERSATIONAL AI SYSTEMS AND APPLICATIONS (granted Nov 25, 2025; 2y 5m to grant)
Patent 12475312 — FOREIGN LANGUAGE PHRASES LEARNING SYSTEM BASED ON BASIC SENTENCE PATTERN UNIT DECOMPOSITION (granted Nov 18, 2025; 2y 5m to grant)
Patent 12430329 — TRANSFORMING NATURAL LANGUAGE TO STRUCTURED QUERY LANGUAGE BASED ON MULTI-TASK LEARNING AND JOINT TRAINING (granted Sep 30, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 46%
With Interview (+59.0%): 99%
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
