Prosecution Insights
Last updated: April 19, 2026
Application No. 18/460,886

Embedding Entity Matching

Non-Final OA §103
Filed: Sep 05, 2023
Examiner: MORRIS, JOHN J
Art Unit: 2152
Tech Center: 2100 — Computer Architecture & Software
Assignee: Crowdstrike Inc.
OA Round: 3 (Non-Final)
Grant Probability: 61% (Moderate)
OA Rounds: 3-4
To Grant: 4y 0m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 61% — grants 61% of resolved cases (167 granted / 273 resolved; +6.2% vs TC avg)
Interview Lift: +20.1% — strong allowance lift for resolved cases with interview
Typical Timeline: 4y 0m avg prosecution; 21 currently pending
Career History: 294 total applications across all art units

Statute-Specific Performance

§101: 11.6% (-28.4% vs TC avg)
§103: 62.0% (+22.0% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 273 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office Action corresponds to application 18/460,886, which was filed on 9/5/2023. Claims 1-20 are currently pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/23/2026 has been entered.

Response to Amendment

In the reply filed 1/23/2026, claims 1, 8, and 15 have been amended. Claim 2 has been cancelled, and no new claims have been added. Accordingly, claims 1 and 3-20 stand pending.

Response to Arguments

Applicant's arguments filed 1/23/2026 have been fully considered but are moot in view of the new grounds of rejection. The applicant argues that it is improper to combine He with Dharaskar because He uses character-level layers and word-level information, whereas Dharaskar converts dataset values into text strings and then converts the text strings into n-grams. The examiner respectfully disagrees. Both He and Dharaskar are in the field of data analysis and address the problem of entity matching. The character-level and word-level teachings of He are used for entity matching, and the teachings of Dharaskar on converting dataset values into text strings and then into n-grams are used for extracting entities from datasets. Using the extracted entities of Dharaskar with the entity matching teachings of He does not prevent He from using character-level and word-level information to determine common entities and therefore does not destroy the principle of operation of He.
Therefore, the examiner is not persuaded.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-6, 8-10, 13-15, 17-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over He et al. (US 2020/0272845, previously presented in '892), hereinafter He, in view of Dharaskar et al. (US 2023/0325418, previously presented in '892), hereinafter Dharaskar, and Ritz et al. (US 2024/0221045), hereinafter Ritz.
Regarding Claim 1: He teaches: A method executed by a computer system for an entity matching (He, abstract, figures 1 and 6), comprising: receiving, by the computer system, different datasets associated with the entity matching (He, abstract, figure 5, [0003, 0011]; note a user may be an entity; note matching entities from different data sources); generating, by the computer system, entity embeddings using a machine learning entity embedding model and entity byte n-grams (He, abstract, figures 1 and 3-5, [0033-0035, 0044-0052, 0095-0100]; note evaluating data sources to identify common attribute types and selecting attribute entity matching models; note using attribute entity matching models to determine a set of weighted scores for attribute pairs, which is interpreted as an entity embedding generated by an entity embedding model using the attributes extracted from the different data sources; note the use of hierarchical deep models, which are interpreted as machine learning entity embedding models; when combined with the reference cited below, this would be for the extracted entity byte n-grams); and determining, by the computer system, a common entity associated with the different datasets based on the entity embeddings generated using the entity byte n-grams (He, abstract, figures 1 and 5, [0033-0037, 0044-0052, 0095-0100]; note using the entity embeddings with the similarity to determine a common entity; note this uses the entity n-grams extracted from the datasets; when combined with the reference cited below, this would be for the extracted entity byte n-grams).

While He teaches entity matching, He doesn't specifically teach generating, by the computer system, a text string by textually concatenating entity attributes read from the different datasets associated with the entity matching; extracting, by the computer system, consecutive characters as n-grams from the text string generated by the textually concatenating of the entity attributes.
However, Dharaskar is in the same field of endeavor, data management, and Dharaskar teaches: generating, by the computer system, a text string by textually concatenating entity attributes read from the datasets associated with the entity matching (Dharaskar, figure 3, [0009-0010, 0015-0016]; note converting the dataset values in a subset of columns in the first and second datasets to respective text strings; when combined with the previously cited references, this would be for the dataset inputs as taught by He); extracting, by the computer system, consecutive characters as n-grams from the text string generated by the textually concatenating of the entity attributes (Dharaskar, figure 3, [0009-0010, 0015-0016]; note converting the dataset values in a subset of columns in the first and second datasets to respective text strings and converting each text string to one or more n-grams; the converting of the text string into one or more n-grams is interpreted as extracting consecutive characters as n-grams from the text string; when combined with the previously cited references, this would be for the dataset inputs as taught by He); generating, by the computer system, entity embeddings using a machine learning entity embedding model and entity byte n-grams representing the consecutive characters extracted as the n-grams from the text string generated by the textually concatenating of the entity attributes (Dharaskar, figure 3, [0008-0012, 0018, 0066]; note using a machine learning model and the extracted n-grams to generate embeddings; when combined with the previously cited references, this would be for the dataset inputs as taught by He); and determining, by the computer system, a common entity associated with the different datasets based on the entity embeddings generated using the entity byte n-grams representing the consecutive characters extracted as the n-grams from the text string generated by the textually concatenating of the entity attributes (Dharaskar, figure 3, [0007-0012, 0018, 0066]; note comparing the similarity metrics to determine if they are a common entity; when combined with the previously cited references, this would be for the dataset inputs as taught by He).

It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Dharaskar because all references are directed to data management and because Dharaskar would expand upon the teachings of the previously cited references in entity matching, which would improve the usability of the system by enabling the use of different platforms with different formats (Dharaskar, [0004-0005]).

While He as modified teaches entity matching with concatenating entity attributes, He as modified doesn't specifically teach that the concatenated entity attributes are from the different datasets associated with the entity matching. However, Ritz is in the same field of endeavor, data management, and Ritz teaches: generating, by the computer system, a text string by textually concatenating entity attributes read from the different datasets associated with the entity matching (Ritz, figures 1-3 and 6, [0030-0031]; note merging two different datasets associated with entity matching; when combined with the previously cited references, this would be the received datasets as taught by He, then used for the textual concatenating of Dharaskar, which would teach converting/generating a text string by concatenating attributes from different datasets).
It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Ritz because all references are directed to data management and because Ritz would expand upon the teachings of the previously cited references in entity matching, which would improve the usability of the system by enabling analysis of different datasets (Ritz, [0004]).

Regarding Claim 3: He as modified shows the method as disclosed above; He as modified further teaches: generating an entity embedding matrix based on the entity embeddings (He, abstract, figures 1 and 5, [0033-0035, 0044-0052, 0095-0100]; note evaluating data sources to identify common attribute types and selecting attribute entity matching models; note using attribute entity matching models to determine a set of weighted scores for attribute pairs, which is interpreted as an entity embedding matrix).

Regarding Claim 4: He as modified shows the method as disclosed above; He as modified further teaches: generating entity similarities by transposing an entity embedding matrix associated with the entity embeddings (He, abstract, figures 1 and 5, [0033-0035, 0044-0052, 0095-0100]; note evaluating data sources to identify common attribute types and selecting attribute entity matching models; note using attribute entity matching models to determine a set of weighted scores for attribute pairs, which is interpreted as an entity embedding matrix; note using attribute entity matching models to determine a set of weighted scores for attribute pairs to determine entity similarities) (Dharaskar, figures 3, 7, and 10, [0007-0012, 0062-0065, 0078]; note the similarity metrics/matrix include cosine similarity between the matrices, and calculating cosine similarity requires the transposition and multiplication of the entity embedding matrix). It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Dharaskar because all references are directed to data management and because Dharaskar would expand upon the teachings of the previously cited references in entity matching, which would improve the usability of the system by enabling the use of different platforms with different formats (Dharaskar, [0004-0005]).

Regarding Claim 5: He as modified shows the method as disclosed above; He as modified further teaches: filtering the entity similarities according to a threshold value (He, figures 1 and 5, [0035, 0046, 0098]; note evaluating the set of weighted scores/aggregate score to determine a common entity that has a probability of a match being outside a threshold; note this limitation is nonfunctional descriptive material as explained in section 2111.05 of the MPEP and does not hold patentable weight; since the entities are linked based on the score being above a threshold value, it is interpreted as filtering out the entities with scores below the threshold value for identifying common entities).

Regarding Claim 6: He as modified shows the method as disclosed above; He as modified further teaches: generating an entity similarity matrix based on the entity embeddings (Dharaskar, figures 3 and 7, [0007-0012, 0065]; note generating an entity similarity matrix). It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Dharaskar because all references are directed to data management and because Dharaskar would expand upon the teachings of the previously cited references in entity matching, which would improve the usability of the system by enabling the use of different platforms with different formats (Dharaskar, [0004-0005]).
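For context on the matrix operations recited in claims 4-6 (transposing an entity embedding matrix, multiplying it to obtain cosine similarities, and filtering against a threshold), here is a minimal sketch in plain Python; the embedding values, dimensions, and threshold are invented for illustration and are not drawn from He, Dharaskar, or the application itself:

```python
import math

# Hypothetical entity embeddings: one vector per entity (values illustrative).
embeddings = [
    [0.9, 0.1, 0.0],   # entity A
    [0.8, 0.2, 0.1],   # entity B
    [0.0, 0.1, 0.9],   # entity C
]

def transpose(matrix):
    """Transpose an embedding matrix (rows become columns)."""
    return [list(col) for col in zip(*matrix)]

def matmul(a, b):
    """Multiply matrix a by matrix b."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

# L2-normalize rows so the product with the transpose yields cosine similarity.
normed = [[x / math.sqrt(sum(v * v for v in row)) for x in row]
          for row in embeddings]

# Entity similarity matrix = normalized embedding matrix times its transpose.
similarity = matmul(normed, transpose(normed))

# Filter the similarities against a threshold to keep candidate common entities.
THRESHOLD = 0.95
matches = [(i, j)
           for i in range(len(similarity))
           for j in range(i + 1, len(similarity))
           if similarity[i][j] >= THRESHOLD]
print(matches)  # [(0, 1)]
```

With row-normalized embeddings, the transpose-and-multiply step is exactly the cosine-similarity computation the rejection attributes to Dharaskar's similarity metrics.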
Regarding Claim 8: He teaches: A computer system that determines a common entity associated with different user accounts, comprising: a central processing unit (He, figure 6, note processor); and a memory device storing instructions that (He, figure 6, note memory), when executed by the central processing unit, perform operations, the operations comprising: receiving entity attributes read from the different user accounts associated with an entity matching (He, abstract, figures 1 and 5, [0003, 0011, 0022, 0095]; note a user may be an entity; note extracting attributes associated with different user accounts, e.g., entities; note matching entities from different data sources); generating entity embeddings using a machine learning entity embedding model and entity byte n-grams (He, abstract, figures 1 and 3-5, [0033-0035, 0044-0052, 0095-0100]; note evaluating data sources to identify common attribute types and selecting attribute entity matching models; note using attribute entity matching models to determine a set of weighted scores for attribute pairs, which is interpreted as an entity embedding generated by an entity embedding model using the attributes extracted from the different data sources; note the use of hierarchical deep models, which are interpreted as machine learning entity embedding models; when combined with the reference cited below, this would be for the n-grams extracted from the concatenated text string); determining entity similarities associated with the entity embeddings (He, abstract, figures 1 and 5, [0033-0035, 0044-0052, 0095-0100]; note using attribute entity matching models to determine a set of weighted scores for attribute pairs to determine entity similarities); and determining the common entity associated with the different user accounts based on the entity similarities associated with the entity embeddings (He, abstract, figures 1 and 5, [0033-0037, 0044-0052, 0095-0100]; note using the entity embeddings with the similarity to determine a common entity).
While He teaches entity matching, He doesn't specifically teach generating a text string by textually concatenating the entity attributes read from the different user accounts associated with the entity matching; extracting consecutive characters as n-grams from the text string generated by the textually concatenating of the entity attributes.

However, Dharaskar is in the same field of endeavor, data management, and Dharaskar teaches: generating a text string by textually concatenating the entity attributes read from the user accounts associated with the entity matching (Dharaskar, figure 3, [0009-0010, 0015-0016]; note converting the dataset values in a subset of columns in the first and second datasets to respective text strings; note the datasets may be from different platforms, which is interpreted to mean different user accounts; when combined with the previously cited references, this would be for the dataset inputs and user accounts as taught by He); extracting consecutive characters as n-grams from the text string generated by the textually concatenating of the entity attributes (Dharaskar, figure 3, [0009-0010, 0015-0016]; note converting the dataset values in a subset of columns in the first and second datasets to respective text strings and converting each text string to one or more n-grams; the converting of the text string into one or more n-grams is interpreted as extracting consecutive characters as n-grams from the text string; when combined with the previously cited references, this would be for the dataset inputs as taught by He); generating entity embeddings using a machine learning entity embedding model and entity byte n-grams representing the consecutive characters as the n-grams from the text string generated by the textually concatenating of the entity attributes (Dharaskar, figure 3, [0008-0012, 0018, 0066]; note using a machine learning model and the extracted n-grams to generate embeddings; when combined with the previously cited references, this would be for the dataset inputs as taught by He); determining entity similarities associated with the entity embeddings (Dharaskar, figure 3, [0008-0012, 0018, 0066]; note determining and comparing the similarity metrics; when combined with the previously cited references, this would be for the dataset inputs as taught by He); and determining the common entity associated with the different user accounts based on the entity similarities associated with the entity embeddings (Dharaskar, figure 3, [0008-0012, 0018, 0066]; note comparing the similarity metrics to determine if they are a common entity; when combined with the previously cited references, this would be for the dataset inputs as taught by He).

It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Dharaskar because all references are directed to data management and because Dharaskar would expand upon the teachings of the previously cited references in entity matching, which would improve the usability of the system by enabling the use of different platforms with different formats (Dharaskar, [0004-0005]).

While He as modified teaches entity matching with concatenating entity attributes, He as modified doesn't specifically teach that the concatenated entity attributes are from the different datasets associated with the entity matching. However, Ritz is in the same field of endeavor, data management, and Ritz teaches: generating a text string by textually concatenating the entity attributes read from the different user accounts associated with the entity matching (Ritz, figures 1-3 and 6, [0030-0031]; note merging two different datasets associated with entity matching; note the datasets may be from different accounts; when combined with the previously cited references, this would be for the dataset inputs and user accounts as taught by He, then used for the textual concatenating of Dharaskar, which would teach converting/generating a text string by concatenating attributes from different accounts). It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Ritz because all references are directed to data management and because Ritz would expand upon the teachings of the previously cited references in entity matching, which would improve the usability of the system by enabling analysis of different datasets (Ritz, [0004]).

Regarding Claim 9: He as modified shows the system as disclosed above; He as modified further teaches: generating ranked entity similarities by ranking the entity similarities (He, abstract, figures 1 and 5, [0033-0035, 0044-0052, 0095-0100]; note using attribute entity matching models to determine a set of weighted scores for attribute pairs to determine entity similarities; note using the weighted scores to generate an aggregate score for the entity pairs; the scores are interpreted as ranking the entity similarities).

Regarding Claim 10: He as modified shows the system as disclosed above; He as modified further teaches: generating filtered entity similarities by filtering the entity similarities (He, figures 1 and 5, [0035, 0046, 0098]; note evaluating the set of weighted scores/aggregate score to determine a common entity that has a probability of a match being outside a threshold; note this limitation is nonfunctional descriptive material as explained in section 2111.05 of the MPEP and does not hold patentable weight; since the entities are linked based on the score being above a threshold value, it is interpreted as filtering out the entities with scores below the threshold value for identifying common entities).
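The preprocessing limitations mapped above (textually concatenating entity attributes into a text string, then extracting consecutive characters as byte n-grams) can be sketched as follows; the record fields, values, and n-gram size are hypothetical and not taken from any cited reference:

```python
# Hypothetical entity attributes read from a user-account record
# (field names and values are invented for illustration).
record = {"name": "Ada Lovelace", "email": "ada@example.com"}

# Generate a single text string by textually concatenating the attributes.
text = "".join(str(value) for value in record.values())

def char_ngrams(s, n=3):
    """Extract consecutive characters as n-grams from a text string."""
    return [s[i:i + n] for i in range(len(s) - n + 1)]

ngrams = char_ngrams(text, n=3)
print(ngrams[:4])  # ['Ada', 'da ', 'a L', ' Lo']

# "Byte n-grams": encode the string and slice the byte sequence the same way.
data = text.encode("utf-8")
byte_ngrams = [data[i:i + 3] for i in range(len(data) - 2)]
print(byte_ngrams[0])  # b'Ada'
```

Sliding a fixed-width window one position at a time is what makes the n-grams "consecutive characters"; encoding first and slicing the byte sequence yields byte n-grams rather than character n-grams.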
Regarding Claim 13: He as modified shows the system as disclosed above; He as modified further teaches: generating an entity embedding matrix based on the entity embeddings (He, abstract, figures 1 and 5, [0033-0035, 0044-0052, 0095-0100]; note evaluating data sources to identify common attribute types and selecting attribute entity matching models; note using attribute entity matching models to determine a set of weighted scores for attribute pairs, which is interpreted as an entity embedding matrix).

Regarding Claim 14: He as modified shows the system as disclosed above; He as modified further teaches: generating an entity similarity matrix based on the entity embeddings (Dharaskar, figures 3 and 7, [0007-0012, 0065]; note generating an entity similarity matrix). It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Dharaskar because all references are directed to data management and because Dharaskar would expand upon the teachings of the previously cited references in entity matching, which would improve the usability of the system by enabling the use of different platforms with different formats (Dharaskar, [0004-0005]).
Regarding Claim 15: He teaches: A non-transitory memory device storing instructions that, when executed by a central processing unit, perform operations that determine a common entity associated with different user accounts (He, figures 1, 5, and 6, note processor and memory), the operations comprising: receiving entity attributes read from the different user accounts associated with an entity matching (He, abstract, figures 1 and 5, [0003, 0011, 0022, 0095]; note a user may be an entity; note extracting attributes associated with different user accounts, e.g., entities; note matching entities from different data sources); extracting the entity byte n-grams from the byte buffer (He, abstract, figures 1 and 5, [0022, 0033-0035, 0041, 0044-0052, 0095-0100]; note extracting attributes associated with different user accounts, e.g., entities); sending the entity byte n-grams extracted from the byte buffer as inputs to a machine learning entity embedding model (He, abstract, figures 2-5, [0022, 0033-0035, 0041, 0044-0052, 0095-0100]; note the entity n-grams from the data sources are used as inputs to the hierarchical deep models; note the use of hierarchical deep models, which are interpreted as machine learning entity embedding models); receiving an entity embedding matrix generated by the machine learning entity embedding model using the entity byte n-grams extracted from the byte buffer (He, abstract, figures 1 and 5, [0033-0035, 0044-0052, 0095-0100]; note evaluating data sources to identify common attribute types and selecting attribute entity matching models; note using attribute entity matching models to determine a set of weighted scores for attribute pairs, which is interpreted as an entity embedding matrix generated by an entity embedding model using the attributes extracted from the different data sources); and determining the common entity associated with the different user accounts based on entity similarities associated with the entity similarity matrix (He, abstract, figures 1 and 5, [0033-0037, 0044-0052, 0095-0100]; note using the entity embeddings with the similarity to determine a common entity).

While He teaches entity matching, He doesn't specifically teach generating a text string by textually concatenating the entity attributes read from the different user accounts associated with the entity matching; extracting consecutive characters as n-grams from the text string generated by the textually concatenating of the entity attributes; storing a byte buffer with entity byte n-grams consecutively read from a bit string representing the consecutive characters extracted as the n-grams from the text strings generated by the textually concatenating of the entity attributes; generating a transposed entity embedding matrix by transposing the entity embedding matrix generated by the machine learning entity embedding model using the entity byte n-grams extracted from the byte buffer; generating an entity similarities by multiplying the entity embedding matrix with the transposed entity embedding matrix.

However, Dharaskar is in the same field of endeavor, data management, and Dharaskar teaches: generating a text string by textually concatenating the entity attributes read from the different user accounts associated with the entity matching (Dharaskar, figure 3, [0003-0004, 0009-0010, 0015-0016]; note converting the dataset values in a subset of columns in the first and second datasets to respective text strings; note the datasets may be from different platforms, which is interpreted to mean different user accounts; when combined with the previously cited references, this would be for the dataset inputs and user accounts as taught by He); extracting consecutive characters as n-grams from the text string generated by the textually concatenating of the entity attributes (Dharaskar, figure 3, [0009-0010, 0015-0016]; note converting the dataset values in a subset of columns in the first and second datasets to respective text strings and converting each text string to one or more n-grams; the converting of the text string into one or more n-grams is interpreted as extracting consecutive characters as n-grams from the text string; when combined with the previously cited references, this would be for the dataset inputs as taught by He); storing a byte buffer with entity byte n-grams consecutively read from a bit string representing the consecutive characters extracted as the n-grams from the text strings generated by the textually concatenating of the entity attributes (Dharaskar, figures 3 and 10, [0009-0010, 0061]; note concatenating textual n-grams and creating a corpus with the concatenated n-grams, which is interpreted as storing in a byte buffer); extracting the entity byte n-grams from the byte buffer (Dharaskar, figures 3 and 10, [0009-0010, 0062-0063]; note using the n-grams from the corpus with a machine learning model); sending the entity byte n-grams extracted from the byte buffer as inputs to a machine learning entity embedding model (Dharaskar, figures 3 and 10, [0009-0010, 0062-0063]; note using the n-grams from the corpus with a machine learning model); receiving an entity embedding matrix generated by the machine learning entity embedding model using the entity byte n-grams extracted from the byte buffer (Dharaskar, figures 3, 7, and 10, [0007-0012, 0062-0065]; note receiving the entity embedding matrix from the machine learning model); generating a transposed entity embedding matrix by transposing the entity embedding matrix generated by the machine learning entity embedding model using the entity byte n-grams extracted from the byte buffer (Dharaskar, figures 3, 7, and 10, [0007-0012, 0062-0065, 0078]; note the similarity metrics include cosine similarity between the matrices, and calculating cosine similarity requires the transposition of the entity embedding matrix); generating an entity similarities by multiplying the entity embedding matrix with the transposed entity embedding matrix (Dharaskar, figures 3, 7, and 10, [0007-0012, 0062-0065, 0078]; note the similarity metrics include cosine similarity between the matrices, and calculating cosine similarity requires the transposition and multiplication of the entity embedding matrix); and determining the common entity associated with the different user accounts based on entity similarities associated with the entity similarity matrix (Dharaskar, figure 3, [0003-0004, 0007-0012, 0018, 0066]; note comparing the similarity metrics to determine if they are a common entity; when combined with the previously cited references, this would be for the dataset inputs as taught by He).

It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Dharaskar because all references are directed to data management and because Dharaskar would expand upon the teachings of the previously cited references in entity matching, which would improve the usability of the system by enabling the use of different platforms with different formats (Dharaskar, [0004-0005]).

While He as modified teaches entity matching with concatenating entity attributes, He as modified doesn't specifically teach that the concatenated entity attributes are from the different datasets associated with the entity matching.
However, Ritz is in the same field of endeavor, data management, and Ritz teaches: generating a text string by textually concatenating the entity attributes read from the different user accounts associated with the entity matching (Ritz, figures 1-3 and 6, [0030-0031]; note merging two different datasets associated with entity matching; note the datasets may be from different accounts; when combined with the previously cited references, this would be for the dataset inputs and user accounts as taught by He, then used for the textual concatenating of Dharaskar, which would teach converting/generating a text string by concatenating attributes from different accounts). It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Ritz because all references are directed to data management and because Ritz would expand upon the teachings of the previously cited references in entity matching, which would improve the usability of the system by enabling analysis of different datasets (Ritz, [0004]).

Regarding Claim 17: He as modified shows the non-transitory memory device as disclosed above; He as modified further teaches: receiving entity embeddings representing the entity embedding matrix (He, abstract, figures 1 and 5, [0033-0035, 0044-0052, 0095-0100]; note evaluating data sources to identify common attribute types and selecting attribute entity matching models; note using attribute entity matching models to determine a set of weighted scores for attribute pairs, which is interpreted as an entity embedding matrix generated by an entity embedding model using the attributes extracted from the different data sources) (Dharaskar, figures 3, 7, and 10, [0007-0012, 0062-0065]; note receiving the entity embedding matrix from the machine learning model).
It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Dharaskar because all references are directed to data management and because Dharaskar would expand upon the teachings of the previously cited references in entity matching, which would improve the usability of the system by enabling the use of different platforms with different formats (Dharaskar, [0004-0005]).

Regarding Claim 18: He as modified shows the non-transitory memory device as disclosed above; He as modified further teaches: filtering entity similarities associated with the entity similarity matrix (He, figures 1 and 5, [0035, 0046, 0098]; note evaluating the set of weighted scores to determine a common entity that has a probability of a match being outside a threshold; note since the entities are linked based on the score being above a threshold value, it is interpreted as filtering out the entities with scores below the threshold value for identifying common entities; note this limitation is nonfunctional descriptive material as explained in section 2111.05 of the MPEP and does not hold patentable weight).

Regarding Claim 20: He as modified shows the non-transitory memory device as disclosed above; He as modified further teaches: comparing entity similarities to a threshold entity similarity value (He, figures 1 and 5, [0035, 0046, 0098]; note comparing the set of weighted scores/aggregate score to a threshold value).

Claim Rejections - 35 USC § 103

Claims 7, 11, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over He in view of Dharaskar, Ritz, and Lan et al. (US 2016/0364366, previously presented in '892), hereinafter Lan.

Regarding Claim 7: He as modified shows the method as disclosed above. While He as modified teaches entity matching, He doesn't specifically teach sorting the entity embeddings.
However, Lan is in the same field of endeavor, entity matching, and Lan teaches: sorting the entity embeddings (Lan, figure 2, [0011], note sorting values of entities within the matrices; note this limitation is nonfunctional descriptive material as explained in section 2111.05 of the MPEP and does not hold patentable weight).

It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Lan because all references are directed to content management and because Lan would expand upon the teachings of the previously cited references in entity matching, which would improve the accuracy of data mining by performing entity matching when entity quantities of data sources are inconsistent (Lan, [0006]).

Regarding Claim 11: He as modified shows the system as disclosed above. While He as modified teaches entity matching, He does not specifically teach generating sorted entity embeddings by sorting the entity embeddings. However, Lan is in the same field of endeavor, entity matching, and Lan teaches: generating sorted entity embeddings by sorting the entity embeddings (Lan, figure 2, [0011], note sorting values of entities within the matrices; note this limitation is nonfunctional descriptive material as explained in section 2111.05 of the MPEP and does not hold patentable weight).

It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Lan because all references are directed to content management and because Lan would expand upon the teachings of the previously cited references in entity matching, which would improve the accuracy of data mining by performing entity matching when entity quantities of data sources are inconsistent (Lan, [0006]).
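The sorting step attributed to Lan (sorting values of entities within matrices) could take many forms; one hedged sketch, ordering embedding vectors by Euclidean norm purely for illustration:

```python
import math

# Illustrative sketch: generating sorted entity embeddings by ordering
# embedding vectors by a scalar key. Sorting by vector norm is an
# assumption for illustration only; Lan's cited figure sorts values
# of entities within matrices.

def sort_embeddings(embeddings: list[list[float]]) -> list[list[float]]:
    """Return the embedding vectors ordered by Euclidean norm."""
    return sorted(embeddings, key=lambda v: math.sqrt(sum(x * x for x in v)))

sorted_embeddings = sort_embeddings([[3.0, 4.0], [0.0, 1.0]])
```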
Regarding Claim 16: He as modified shows the non-transitory memory device as disclosed above. While He as modified teaches entity matching, He does not specifically teach sorting the entity embeddings. However, Lan is in the same field of endeavor, entity matching, and Lan teaches: sorting entity embeddings associated with the entity embedding matrix (Lan, figure 2, [0011], note sorting values of entities within the matrices; note this limitation is nonfunctional descriptive material as explained in section 2111.05 of the MPEP and does not hold patentable weight).

It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Lan because all references are directed to content management and because Lan would expand upon the teachings of the previously cited references in entity matching, which would improve the accuracy of data mining by performing entity matching when entity quantities of data sources are inconsistent (Lan, [0006]).

Claim Rejections - 35 USC § 103

Claims 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over He in view of Dharaskar, Ritz, and Zhong et al. (US2021/0303638, previously presented in '892), hereinafter Zhong.

Regarding Claim 12: He as modified shows the system as disclosed above. While He as modified teaches entity matching, He does not specifically teach searching the entity embeddings according to an embedding search space. However, Zhong is in the same field of endeavor, entity matching, and Zhong teaches: searching the entity embeddings according to an embedding search space (Zhong, abstract, [0020], note searching entity embeddings according to an embedding search space).
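Searching entity embeddings according to an embedding search space, as attributed to Zhong, could be sketched as a nearest-neighbor lookup; the use of cosine similarity here is an assumption for illustration, not a characterization of Zhong:

```python
import math

# Illustrative sketch: searching an embedding space for the entity
# embedding most similar to a query vector (cosine similarity).

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest_entity(query: list[float], embeddings: list[list[float]]) -> int:
    """Index of the embedding closest to the query by cosine similarity."""
    return max(range(len(embeddings)), key=lambda i: cosine(query, embeddings[i]))

idx = nearest_entity([1.0, 0.0], [[0.0, 1.0], [1.0, 0.1]])
```

In practice, an index structure (rather than this linear scan) would define the search space, but the linear scan is sufficient to illustrate the operation.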
It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Zhong because all references are directed to content management and because Zhong would expand upon the teachings of the previously cited references in entity matching, which would improve the quality/accuracy by performing semantic matching to standardized entities (Zhong, [0004, 0016]).

Regarding Claim 19: He as modified shows the non-transitory memory device as disclosed above. While He as modified teaches entity matching, He as modified does not specifically teach searching entity embeddings associated with the entity embedding matrix. However, Zhong is in the same field of endeavor, entity matching, and Zhong teaches: searching entity embeddings associated with the entity embedding matrix (Zhong, abstract, [0020], note searching entity embeddings. When combined with the previously cited references, this would be for the entity embedding matrix as taught by He and Dharaskar).

It would have been obvious to one of ordinary skill in the art at the time the invention was made to have incorporated the teachings of Zhong because all references are directed to content management and because Zhong would expand upon the teachings of the previously cited references in entity matching, which would improve the quality/accuracy by performing semantic matching to standardized entities (Zhong, [0004, 0016]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Mishra et al. (US2020/0226325) teaches concatenating n-grams from data sources for entity matching.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN J MORRIS, whose telephone number is (571) 272-3314. The examiner can normally be reached M-F 6:00-2:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Neveen Abel-Jalil, can be reached at 571-270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOHN J MORRIS/
Examiner, Art Unit 2152
2/27/2026

/NEVEEN ABEL JALIL/
Supervisory Patent Examiner, Art Unit 2152

Prosecution Timeline

Sep 05, 2023: Application Filed
Apr 18, 2025: Non-Final Rejection — §103
May 15, 2025: Applicant Interview (Telephonic)
May 15, 2025: Examiner Interview Summary
Jul 18, 2025: Response Filed
Oct 23, 2025: Final Rejection — §103
Dec 11, 2025: Applicant Interview (Telephonic)
Dec 11, 2025: Examiner Interview Summary
Dec 27, 2025: Response after Non-Final Action
Jan 23, 2026: Request for Continued Examination
Jan 30, 2026: Response after Non-Final Action
Feb 27, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585666: CLOUD ENVIRONMENT DATA DISTRIBUTION
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585630: METHOD AND APPARATUS FOR ANALYZING COVERAGE, BIAS, AND MODEL EXPLANATIONS IN LARGE DIMENSIONAL MODELING DATA
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12536137: VALIDATING DATA FOR INTEGRATION
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12530369: RESUME BACKUP OF EXTERNAL STORAGE DEVICE USING MULTI-ROOT SYSTEM
Granted Jan 20, 2026 (2y 5m to grant)
Patent 12524397: AUTOMATED BATCH GENERATION AND SUBSEQUENT SUBMISSION AND MONITORING OF BATCHES PROCESSED BY A SYSTEM
Granted Jan 13, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 61%
With Interview: 81% (+20.1%)
Median Time to Grant: 4y 0m
PTA Risk: High
Based on 273 resolved cases by this examiner. Grant probability derived from career allow rate.
