Prosecution Insights
Last updated: April 19, 2026
Application No. 18/718,998

ENABLING FEDERATED CONCEPT MAPS IN LOGICAL ARCHITECTURE OF DATA MESH

Non-Final OA (§103)
Filed: Jun 12, 2024
Examiner: BLACK, LINH
Art Unit: 2163
Tech Center: 2100 — Computer Architecture & Software
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 1 (Non-Final)
Grant Probability: 51% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 5y 1m
Grant Probability with Interview: 62%

Examiner Intelligence

Career Allow Rate: 51% (222 granted / 437 resolved; -4.2% vs TC avg)
Interview Lift: +11.5% on resolved cases with interview (moderate lift)
Typical Timeline: 5y 1m average prosecution; 40 applications currently pending
Career History: 477 total applications across all art units
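The card's headline figures can be reproduced from its own counts. The sketch below is illustrative only; the counts (222 granted / 437 resolved) and the +11.5% interview lift come from the card, and treating the lift as an additive percentage-point bump is an assumption about how the tool computes it.

```python
# Reproduce the card's headline numbers from the stated counts.
# Assumption: the interview lift is an absolute (additive) bump.
granted, resolved = 222, 437

career_allow_rate = granted / resolved        # 0.508 -> displayed as 51%
with_interview = career_allow_rate + 0.115    # 0.623 -> displayed as 62%

print(f"Career allow rate: {career_allow_rate:.1%}")   # 50.8%
print(f"With interview:    {with_interview:.1%}")      # 62.3%
```

The displayed 51% and 62% are consistent with simple rounding of these values.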

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 64.0% (+24.0% vs TC avg)
§102: 16.5% (-23.5% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 437 resolved cases
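Notably, the "vs TC avg" deltas all appear to be computed against a single flat baseline rather than per-statute Tech Center averages. The check below makes this visible; the flat-40% baseline is an inference from the displayed numbers, not something the card states.

```python
# Subtracting each statute's displayed delta from its rate recovers the
# implied Tech Center average. All four come out to 40.0, suggesting a
# single flat baseline estimate (an inference, not stated on the card).
rates  = {"101": 12.3, "103": 64.0, "102": 16.5, "112": 3.3}
deltas = {"101": -27.7, "103": 24.0, "102": -23.5, "112": -36.7}

implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)   # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```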

Office Action (§103)
DETAILED ACTION

This communication is in response to the application filed 6/12/2024. Claims 1-17 are pending in the application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 6/12/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-2, 6-11, 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhong et al. (US 20210303638) in view of Green et al. (US 20220391778).

As per claims 1, 14, 16-17, Zhong et al. (US 20210303638) teaches a computer-implemented method of searching a plurality of data sources (para. 15-16: performing the search and/or the context of search, and some or all of the ranked jobs are returned as search results to the user; fig. 1: search engine, data repository), the method comprising: obtaining, from a first local Machine Learning (ML) model, a first set of word embeddings corresponding to a first relationship mapping of a first plurality of documents from a first data source; obtaining, from a second local ML model, a second set of word embeddings corresponding to a second relationship mapping of a second plurality of documents from a second data source (para. 52-55: the semantic similarity (or distance) between input string and standardized entities is evaluated using an input string embedding that is generated from input string and a set of entity embeddings generated from standardized entities; input string embedding and entity embeddings are generated by applying an embedding model to input string and standardized entities, respectively. For example, embedding model includes a word2vec model, fastText model, Global Vectors for Word Representation (GloVe) model, Embeddings from language models (ELMo) model, transformer, convolutional neural network, recurrent neural network, and/or another type of machine learning model; para. 60-61: embedding repository may include a key-value store. After entity embeddings are created by embedding model from a set of standardized entities, model-creation apparatus may store a mapping between each standardized entity and the corresponding embedding in the key-value store; para. 66-67: sets of clusters wherein each cluster in the hierarchy is composed of embeddings of standardized entities); generating a first latent space representation by processing the first set of word embeddings using a first artificial neural network (ANN) trained with the first local ML model, wherein the first latent space representation comprises a plurality of first contexts associated with the first set of word embeddings (para. 2: artificial neural networks; para. 53: types of artificial neural networks: convolutional neural network, recurrent neural network, and/or another type of machine learning model; para. 19: after the embedding model is trained, the embedding model generates embeddings that are closer in the latent space for a given input string-standardized entity pair with a positive label. Conversely, the embedding model produces embeddings that are farther apart in the latent space for a given input string-standardized entity pair with a negative label. Thus, the distances between embeddings of a standardized entity and an input string may reflect semantic similarities or dissimilarities between the standardized entity and input string; fig. 5: apply the embedding model to the standardized entities to generate embeddings for the standardized entities. Produce a set of clusters of the embeddings at a lowest level of the hierarchy. Merge subsets of the clusters into another set of clusters at a higher level of the hierarchy); generating a second latent space representation by processing the second set of word embeddings using a second ANN trained with the second local ML model, wherein the second latent space representation comprises a plurality of second contexts associated with the second set of word embeddings (para. 17-19: the embedding model generates embeddings that are closer in the latent space for a given input string-standardized entity pair with a positive label. Conversely, the embedding model produces embeddings that are farther apart in the latent space for a given input string-standardized entity pair with a negative label. Thus, the distances between embeddings of a standardized entity and an input string may reflect semantic similarities or dissimilarities between the standardized entity and input string; para. 47-48: one or more components may track searches, clicks, views, text input, conversions, and/or other feedback during the entities' interaction with the online system. The feedback may be stored in data repository and used as training data for one or more machine learning models, and the output of the machine learning model(s). Moreover, standardization of fields in data may improve analysis of the data by the machine learning model(s), as well as use of data with products in and/or associated with the online system; para. 66: second set of clusters. The first, second, third, and fourth sets of clusters are disjoint); correlating the first set of word embeddings and the second set of word embeddings based on the plurality of first contexts and the plurality of second contexts; aggregating, based on the correlating, the first set of word embeddings and the second set of word embeddings into a global Machine Learning (ML) model of word embeddings (para. 17-18: an embedding model is trained to semantically associate standardized entities with raw input strings that have substantially the same meaning. The embedding model includes one or more embedding layers that convert words and/or sequences of words in each input string into an embedding that is a vector representation of the input string in a lower dimensional vector space. To allow the embedding model to learn semantic relationships between the input strings and standardized entities, the embedding model is trained to predict outcomes associated with pairs of the input strings and standardized entities; para. 59-60: obtains an entity embedding for the standardized entity from the last hidden layer produced by the BERT model from a classification token that is added to the beginning of the sequence. Model-creation apparatus also, or instead, uses a max pooling and/or other operation to aggregate vectors in the last hidden layer of the BERT model produced from individual tokens in the sequence to generate an entity embedding for the standardized entity); obtaining a search query, the search query comprising a context and one or more word embeddings; generating a response to the query using the global ML model; and outputting the generated response to a user (para. 20-22: when a new input string is received (e.g., in a search term submitted by a user), an embedding of the input string from the embedding model is compared to the embeddings of the standardized entities to identify a number of standardized entities with embeddings that are closest to the input string's embedding in the vector space; para. 53: embedding model includes a word2vec model, fastText model, Global Vectors for Word Representation (GloVe) model, Embeddings from language models (ELMo) model, transformer, convolutional neural network, recurrent neural network, and/or another type of machine learning model). Even if Zhong does not explicitly teach a global Machine Learning (ML) model, Green et al. teaches said limitation at para. 72; para. 7: federated learning of embeddings. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhong to include a global Machine Learning (ML) model of Green to effectively learn and identify from datasets of different sources patterns that help automate and/or data driven decisions needed for the data processing system/applications.

As per claims 2, 15, Zhong teaches wherein the first data source has a location that is different than a location of the second data source (para. 39-40, 83-84: a number of machine learning models and/or techniques may be used to generate input string embedding, entity embeddings, clusters, hierarchy, match scores, and/or output. For example, the functionality of embedding model may be provided by various types of neural network, deep learning, and/or embedding model architectures. Multiple versions of embedding model may be adapted to different entity types and/or sources of user-provided input strings (e.g., posted jobs, searches, user profiles, etc. Thus, different data sources have different locations)).

As per claim 6, Zhong teaches wherein the ANN is a Bidirectional Encoder Representation from Transformers (BERT) language model (para. 60: a Bidirectional Encoder Representations from Transformers (BERT) model and/or another type of bidirectional transformer encoder. Model-creation apparatus 210 obtains an entity embedding for the standardized entity from the last hidden layer produced by the BERT model from a classification token that is added to the beginning of the sequence).

As per claim 7, Zhong teaches selecting a first word embedding in the first set of word embeddings having a first corresponding context; selecting a second word embedding in the second set of word embeddings having a second corresponding context; determining that the first word embedding is the same or substantially the same as the second word embedding and that the first corresponding context is the same or substantially the same as the second corresponding context (para. 17: an embedding model is trained to semantically associate standardized entities with raw input strings that have substantially the same meaning); in response to the determination that the first word embedding is the same or substantially the same as the second word embedding and that the first corresponding context is the same or substantially the same as the second corresponding context, averaging the first word embedding and the second word embedding in the global ML model (para. 59: one or more rows represented by the index(es) in the weight matrix are then retrieved, and an entity embedding for the standardized entity is produced from values of the rows (e.g., by averaging or otherwise aggregating the rows into a single vector); para. 68). Green also teaches said limitations at para. 64-67.

As per claim 8, Zhong teaches selecting a third word embedding in the first set of word embeddings having a third corresponding context; selecting a fourth word embedding in the second set of word embeddings having a fourth corresponding context; determining that the third word embedding is the same or substantially the same as the fourth word embedding and that the third corresponding context is not the same or substantially the same as the fourth corresponding context (para. 63-66: identify closest embeddings, analysis apparatus performs a top-down search of hierarchy, beginning at the highest level and ending at the lowest level. At the highest level of hierarchy, analysis apparatus identifies one or more clusters that are closest to input string embedding (e.g., based on distances between the centroids of clusters in the highest level and input string embedding in the embedding space). Analysis apparatus recursively repeats the process with additional clusters that are grouped under the identified cluster(s) in a lower level of hierarchy until a cluster with a centroid that is closest to input string embedding is found in the lowest level of hierarchy); in response to the determination that the third word embedding is the same or substantially the same as the fourth word embedding and that the third corresponding context is not the same or substantially the same as the fourth corresponding context, maintaining the first word embedding and the second word embedding in the global ML model (para. 66: cluster 1 310 includes a third set of clusters (e.g., cluster 1 322, cluster F 324), and cluster C 312 includes a fourth set of clusters (e.g., cluster 1 326, cluster G 328). The first, second, third, and fourth sets of clusters are disjoint).

As per claim 9, Zhong does not teach said claim. Green teaches obtaining a public dataset, the public dataset comprising a plurality of public search queries and corresponding public outputs; and for each public search query in the public dataset, determining a conditional probability of each word in the global ML model appearing in the corresponding public output (para. 41-43: generating user embeddings locally on the client devices which, optionally in cooperation with a global model trained by federated learning, provide representations of characteristics of the user or users of the particular client devices. As one example, the user embedding can be jointly learned as an input to the machine-learned model alongside positive embeddings. Federated learning leverages the computational power of a large number of computing devices (e.g., user mobile devices) to improve the overall abilities of the interconnected computing system formed thereby. As one example, the techniques described herein enable the effective training of machine-learned embeddings used to perform a computational task (e.g., an image processing, computer vision task, sensor data processing task, audio processing task, text processing task, classification task, detection task, recognition task, data search task, etc.); determining, for each search query in the public dataset, a corresponding context using the global ML model; and updating the conditional probabilities based on the determined corresponding context (para. 64-67: the client device may transmit information indicative of the updates to a server for updating the global embeddings. The information indicative of the updates may be provided to the server for aggregation in a federated learning algorithm. Updates computed for the global machine-learned model may also incorporate a variable learning rate. In general, the learning rate scales the magnitude of the update calculated for an embedding; para. 90: the output of each of the models shown FIGS. 3A and 3B can be a respective probability or other measure of match for each possible entity. A highest probability or best match can be selected as the ultimate output prediction). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhong to include a global Machine Learning (ML) model of Green to effectively learn and identify from datasets of different sources patterns that help automate and/or data driven decisions needed for the data processing system/applications.

As per claim 10, Zhong does not teach said claim. Green teaches identifying the context of the obtained search query using the global ML model; and predicting, based on the identified context and the global ML model, one or more words to include in the response to the query (para. 57, 97: the user embeddings may be used locally on the client device to improve the performance of the globally trained embeddings. For instance, in predictive tasks, a user-specific context vector may adapt a global prediction based on a user's idiosyncratic tendencies. For example, a global model may provide for the prediction of likely items of interest within an application, and a local user embedding may modify and/or augment the prediction based on information specific to the user; para. 122). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhong to include a global Machine Learning (ML) model of Green to effectively learn and identify from datasets of different sources/user devices for similar patterns that allow the system to provide more accurate search results.

As per claim 11, Zhong teaches arranging the one or more words using a natural language generation (NLG) model, wherein the generated response comprises the arrangement of the one or more words (para. 17, 23: provide technological and performance improvements in computer systems, applications, user experiences, tools, platforms, and/or technologies related to natural language processing, processing user input, retrieving documents, conducting searches, and/or generating recommendations).

Claim(s) 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over Zhong et al. (US 20210303638) in view of Green et al. (US 20220391778) and further in view of Conti et al. (US 20200175390).

As per claims 3-4, Zhong and Green do not teach said claims. Conti teaches wherein the first relationship mapping comprises a first plurality of tuples and the second relationship mapping comprises a second plurality of tuples, wherein each tuple comprises a first entity, a second entity, and a relationship between the first entity and the second entity; wherein the first local ML model predicts a relationship between a first entity and a second entity in the first plurality of tuples, and the second local ML model predicts a relationship between a first entity and a second entity in the second plurality of tuples (para. 58-61: for custD, the relevant row (tuple) 404 would be "custD 9/16 Walmart NY Stationery 'Crayons, Folders' 25". In the vector space, the word vector of custD is more similar to the word vector of custB as both bought stationery, including crayons. Likewise, the word vector of custA is more similar to the word vector of custC as both bought fresh produce, including bananas; para. 65: By mapping known data profiles associated with meaningful word embedding models to parameters used to generate the meaningful word embedding models, relationships between data profiles and parameters that lead to meaningful word models may emerge). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhong and Green to include entities' relationship mappings of Conti in order to improve data management, uncover patterns and/or connections of businesses for better business decisions.

As per claim 5, Zhong et al. teaches wherein the first local ML model and the second local ML model comprise continuous bag of words (CBOW) models (para. 94-96: the trained embedding model is then applied to the standardized entities to generate embeddings for the standardized entities. For example, a bag-of-words… Operation 510 may be repeated to continue creating the hierarchy).

Claim(s) 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Zhong et al. (US 20210303638) in view of Green et al. (US 20220391778) and further in view of Yoshihama (US 20130152158).

As per claim 12, Zhong does not teach said claim. Green teaches marking a word embedding in the global ML model (para. 72: a global Machine Learning (ML) model; para. 7: federated learning of embeddings). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhong to include a global Machine Learning (ML) model of Green to effectively learn and identify from datasets of different sources patterns that help automate and/or data driven decisions needed for the data processing system/applications. Zhong and Green do not teach the marking as comprising an anonymized word, wherein the marked word embedding corresponds to an anonymized word present in one or more of the first plurality of documents. Yoshihama teaches said limitation at para. 51: a replacement word made by merely replacing a string at the left of "@" mark at random might be an email address actually used; thus, the email address can be anonymized, for example, by replacing the string with "*" (asterisk) or "!" (exclamation mark) in such a way that the replacement word can be recognized as an email address; para. 59: the confidentiality level can be associated with the template as a structured document such as an XML by parsing and converting the template into a layered structure of word/string/regular expression, or more simply by registering the confidentiality level in a table having a structure of, for example, [template identification value, the number of words from beginning, confidential, the number of words from beginning, non-confidential, the number of words from beginning, confidential]. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhong and Green to include the teachings of Yoshihama to effectively learn and identify from datasets of different sources confidential words to protect data privacy.

As per claim 13, Zhong and Green do not teach said claim. Yoshihama teaches determining that the generated response comprises the anonymized word; in response to the determining, identifying a document in the first plurality of documents having a frequency of the anonymized word that is greater than a frequency of the anonymized word in any other document in the first plurality of documents; and providing the identified document to the user (para. 69-71: when the co-appearance frequencies between a string in the confidential portion (A) and a string in the variable portion (B) of which confidential level is unknown are not less than a certain threshold value of TH1, and at the same time the co-appearance frequencies between the string in the variable portion (B) and the string other than those in the confidential portion (A with upper bar) are not more than the threshold value TH2, the currently determining unknown variable portion may be estimated to be confidential. The reason adopting the processing by the above logical condition is, for example, when the value of the variable portion is an individual name which is confidential information, the strings (for example, birthday, e-mail address, password of the individual) which appears together with the individual name in high frequencies should be considered to be confidential; para. 74). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Zhong and Green to include the teachings of Yoshihama to effectively learn and identify from datasets of different sources confidential words to protect data privacy.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Pardeshi et al. (US 20210397971) teaches at para. 48, 56: data can come from various sources.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINH BLACK whose telephone number is (571)272-4106. The examiner can normally be reached 9AM-5PM EST M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tony Mahmoudi, can be reached on 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LINH BLACK/
Examiner, Art Unit 2163

/TONY MAHMOUDI/
Supervisory Patent Examiner, Art Unit 2163
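The aggregation rule the OA maps to claims 7-8 (average embeddings for the same word when the contexts are substantially the same; keep both entries when the contexts differ) can be sketched as follows. This is a minimal illustration only: representing contexts as vectors, comparing them by cosine similarity, and the 0.9 threshold are all assumptions for the sketch, not anything drawn from Zhong or Green.

```python
# Sketch of the claim 7-8 aggregation rule: merge two local models of
# {(word, context_vector): embedding} into one global model.
# The cosine comparison and 0.9 threshold are illustrative assumptions.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def aggregate(local_a, local_b, context_sim_threshold=0.9):
    """Merge two {(word, context): vector} maps into a global model."""
    global_model = dict(local_a)
    for (word_b, ctx_b), vec_b in local_b.items():
        merged_in = False
        for (word_a, ctx_a), vec_a in list(global_model.items()):
            # "same or substantially the same" word and context (claim 7)
            if word_a == word_b and cosine(ctx_a, ctx_b) >= context_sim_threshold:
                global_model[(word_a, ctx_a)] = tuple(
                    (a + b) / 2 for a, b in zip(vec_a, vec_b))  # average
                merged_in = True
                break
        if not merged_in:
            global_model[(word_b, ctx_b)] = vec_b  # different context: keep both (claim 8)
    return global_model

# Hypothetical toy data: two senses of "bank" from two local models.
local_a = {("bank", (1.0, 0.0)): (0.2, 0.4)}          # finance-like context
local_b = {("bank", (0.99, 0.05)): (0.4, 0.2),        # near-identical context -> averaged
           ("bank", (0.0, 1.0)): (0.9, 0.1)}          # river-like context -> kept separate
merged = aggregate(local_a, local_b)
```

After merging, the global model holds one averaged entry for the shared "bank" context and a separate entry for the dissimilar one, mirroring the averaging/maintaining split recited in claims 7 and 8.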

Prosecution Timeline

Jun 12, 2024
Application Filed
Jan 10, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602376: SYSTEMS AND METHODS FOR DATA CURATION IN A DOCUMENT PROCESSING SYSTEM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12530339: DISTRIBUTED PLATFORM FOR COMPUTATION AND TRUSTED VALIDATION (granted Jan 20, 2026; 2y 5m to grant)
Patent 12468835: SYSTEM AND METHOD FOR SESSION-AWARE DATASTORE FOR THE EDGE (granted Nov 11, 2025; 2y 5m to grant)
Patent 12461923: SUITABILITY METRICS BASED ON ENVIRONMENTAL SENSOR DATA (granted Nov 04, 2025; 2y 5m to grant)
Patent 12450239: METHODS AND APPARATUS FOR IMPROVING SEARCH RETRIEVAL (granted Oct 21, 2025; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 51%
Grant Probability with Interview: 62% (+11.5%)
Median Time to Grant: 5y 1m
PTA Risk: Low
Based on 437 resolved cases by this examiner. Grant probability derived from career allow rate.
