Prosecution Insights
Last updated: April 19, 2026
Application No. 18/047,047

METHODS AND SYSTEMS FOR RANKING TRADEMARK SEARCH RESULTS

Non-Final OA (§101, §103, §112)
Filed: Oct 17, 2022
Examiner: ALLEN, NICHOLAS E
Art Unit: 2154
Tech Center: 2100 — Computer Architecture & Software
Assignee: Camelot UK Bidco Limited
OA Round: 5 (Non-Final)
Grant Probability: 77% (Favorable)
OA Rounds: 5-6
To Grant: 3y 3m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 77%, above average (585 granted / 760 resolved; +22.0% vs TC avg)
Interview Lift: +16.2% for resolved cases with interview (strong)
Avg Prosecution: 3y 3m (68 currently pending)
Total Applications: 828 across all art units
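The headline figures above can be reproduced from the card's raw counts. A minimal sketch, assuming the lift is the simple percentage-point difference between the overall allowance rate and the modeled with-interview rate (the per-group grant counts behind the card's +16.2% are not published, so the simple difference only approximates it):

```python
# Reconstruct the examiner's headline statistics from the raw counts shown
# on the card. The 93% with-interview figure is taken from the prediction
# card above; per-group counts are not shown, so they are not derived here.

def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage, rounded to one decimal place."""
    return round(100 * granted / resolved, 1)

career_rate = allow_rate(585, 760)        # 585 granted of 760 resolved
with_interview_rate = 93.0                # modeled rate from the card above
interview_lift = round(with_interview_rate - career_rate, 1)

print(f"Career allow rate: {career_rate}%")     # 77.0%
print(f"Approx. interview lift: +{interview_lift} points")
```

The card's +16.2% presumably comes from the unpublished per-group counts; the simple difference computed here (+16.0 points) is only a sanity check.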

Statute-Specific Performance

§101: 22.7% (-17.3% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 16.1% (-23.9% vs TC avg)
§112: 4.7% (-35.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 760 resolved cases
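The Tech Center baseline implied by each row can be recovered directly: TC average ≈ examiner rate − reported delta. A quick check with the figures from the table above:

```python
# Recover the implied Tech Center average from each statute-specific
# allowance rate and its reported offset vs. the TC average (table above).
stats = {
    "101": (22.7, -17.3),
    "103": (50.6, +10.6),
    "102": (16.1, -23.9),
    "112": (4.7,  -35.3),
}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)
    print(f"§{statute}: examiner {rate}% vs TC avg ~{tc_avg}%")
# Every row implies the same ~40.0% baseline, consistent with a single
# Tech Center reference value behind all four deltas.
```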

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 19, 2025 has been entered. In response to Applicant's claims filed on November 19, 2025, claims 1-20 are now pending for examination in the application.

Response to Arguments

This Office action is in response to the amendment filed 11/19/2025. In this action, claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Keyagnaert et al. [1] (US Pub. No. 20180225363) and Keyagnaert et al. [2] (US Pub. No. 20200111021) and MAU et al. (US Pub. No. 20200401851) in further view of Wang et al. (US Pub. No. 20220046110). The Wang et al. reference has been added to address the amendment reciting "resulting in the training data comprising an increased representation of trademark pairs that are marked as similar compared to the larger dataset."

Applicant's arguments: In regards to claim 1 on page 10, Applicant argues: "As recited by the claims, these two models are trained in accordance with particular techniques using particular sets of training data that are different from each other, and each model outputs a different set of information based on its respective training. Such features, among others recited in the claims, are inextricably tied to computer-based models that are trained in the particular manner as recited in the claims, and the utilization of such trained models as recited could not be performed by a human.
Accordingly, the claims cannot be performed in the mind of a human, either mentally or with use of a pen and paper, nor do the claims recite mathematical concepts."

Examiner's Reply: Applicant argues that the amended claims comprise statutory subject matter. Examiner respectfully disagrees. Again, the generating of scores is a mental process performed by the human mind using a computer as a tool to compare trademarks for similarity using mathematical calculations, paired with additional elements such as training data. The calculation of a similarity score is a mathematical concept.

Applicant's arguments: In regards to claim 1 on page 15, Applicant argues: "... graph data structure. In the instant claims, as noted above, the first trained model is trained in a specific fashion, which results in improvements to the performance of the model, improves the speed of training, and reduces processing resources utilized during training. See as-filed Specification at ¶ [0042]. In other words, the ordered combination recited in the claims is directed to improvements with respect to the trained model as well as the computing system on which it is implemented. Thus, like claim 3 in Example 48, the claims of the instant application both relate to technical improvements and are integrated into a practical application. For at least these additional reasons, the claims of the instant application are patent eligible."

Examiner's Reply: If a claim limitation, under its broadest reasonable interpretation, covers a commercial interaction or mental process (analyzing trademarks), then it falls within the "Mental process" grouping of abstract ideas set forth in the 2019 PEG. Accordingly, the claim recites an abstract idea. The examiner notes that the computer elements recited in the claims are being used for ranking trademarks by similarity (i.e., being used as generic tools).
Therefore, the abstract idea recited in the claims is generally linked to a computer environment and is not integrated into a practical application. Analyzing trademarks for similarities does not improve the functioning of a computer.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claims 1, 9, and 17 contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. There is no support for "resulting in the training data comprising an increased representation of trademark pairs that are marked as similar compared to the larger dataset ….".
Dependent claims 2-8, 10-16, and 18-20 are also rejected for inheriting the deficiencies of the independent claims from which they depend.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The judicial exception is not integrated into a practical application, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis in support of these findings is provided below, in accordance with the 2019 Revised Patent Subject Matter Eligibility Guidance, hereinafter 2019 PEG.

Step 1. In accordance with Step 1 of the eligibility inquiry (as explained in MPEP 2106), it is first noted that the claimed method (claims 1-8), system (claims 9-16), and non-transitory medium (claims 17-20) are each directed to one of the eligible categories of subject matter and therefore satisfy Step 1.
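The limitation in dispute throughout this action — selecting a training subset whose ratio of false-positive (dissimilar) to true-positive (similar) pairs is reduced relative to the larger dataset — can be illustrated with a toy rebalancing step. Everything below (the pair data, the `max_ratio` parameter, the sampling strategy) is hypothetical and shows only the general selection idea, not the applicant's actual method:

```python
import random

def rebalance_pairs(pairs, max_ratio: float, seed: int = 0):
    """Select a subset of labeled trademark-name pairs so that 'dissimilar'
    pairs number at most max_ratio times the 'similar' pairs, increasing
    the representation of similar pairs relative to the larger dataset."""
    rng = random.Random(seed)
    similar = [p for p in pairs if p[2] == "similar"]
    dissimilar = [p for p in pairs if p[2] == "dissimilar"]
    keep = min(len(dissimilar), int(max_ratio * len(similar)))
    return similar + rng.sample(dissimilar, keep)

# Toy larger dataset: 2 similar pairs and 8 dissimilar pairs (4:1 ratio).
larger = [("ACME", "ACMEE", "similar"), ("ZENITH", "ZENYTH", "similar")] + [
    (f"MARK{i}", f"BRAND{i}", "dissimilar") for i in range(8)
]
subset = rebalance_pairs(larger, max_ratio=1.0)
# The subset holds 2 similar + 2 dissimilar pairs, so similar pairs rise
# from 20% of the larger dataset to 50% of the training subset.
```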
Step 2A, prong one. Independent claims 1, 9, and 17 recite the following limitations directed toward mental processes and mathematical concepts: obtaining a set of search results comprising trademark names having at least a minimum degree of similarity with the candidate trademark name (mental step of observing and/or evaluating trademark data on a computer screen; the computer is being used as a generic tool); for each trademark name in the set of search results, obtaining a goods/services similarity score indicating a level of similarity between goods/services information associated with the trademark name and goods/services information associated with the candidate trademark name, the goods/services similarity score generated by a goods/services scoring model trained in accordance with an unsupervised learning algorithm based on a corpus of training data relating to registered trademarks (the limitation recites a mathematical concept: calculating similarity scores); generating a set of combined scores based at least on the trademark name similarity scores and the goods/services similarity scores (the limitation recites a mathematical concept: calculating similarity scores).
In accordance with Step 2A, prong two of the 2019 PEG, the judicial exception is not integrated into a practical application because of the recitation in claims 1, 9, and 17 of: at least one processor circuit (i.e., a generic processor/component performing a generic computer function); and at least one memory that stores program code configured to be executed by the at least one processor circuit (i.e., a generic processor/component performing a generic computer function), the program code configured to, when executed by the at least one processor circuit, cause the system to: receive information about a candidate trademark, the information including at least a candidate trademark name and goods/services information (recites insignificant extra-solution activity of gathering trademark data); provide the candidate trademark name and the set of search results to a first trained model, the first trained model outputs, for each trademark name in the set of search results, a trademark name similarity score between the trademark name and the candidate trademark name, the first trained model is trained based at least on training data that comprises a subset of pairs of historical trademark names and a corresponding label stored in a database that marks each pair as either similar or dissimilar, the subset of pairs of the historical trademark names comprising a reduction in a ratio of false positive cases to true positive cases compared to a larger data set from which the subset of pairs is selected, wherein the false positive cases comprise trademark pairs in the training data that are marked as dissimilar and the true positive cases comprise trademark pairs in the training data that are marked as similar, resulting in the training data comprising an increased representation of trademark pairs that are marked as similar compared to the larger dataset (recites insignificant extra-solution activity of outputting trademark data).
providing a ranked list of the search results based at least on the set of combined scores (recites insignificant extra-solution activity of outputting trademark data).

Step 2B. Similar to the analysis under Step 2A, prong two, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Because the additional elements of the independent claims amount to insignificant extra-solution activity and/or mere instructions, the additional elements do not add significantly more to the judicial exception such that the independent claims as a whole would be patent eligible. Therefore, independent claims 1, 9, and 17 are rejected under 35 U.S.C. 101.

With respect to claim(s) 2 and 10: Step 2A, prong one of the 2019 PEG: wherein the trademark name similarity score for each trademark name in the set of search results indicates a level of visual similarity between the trademark name and the candidate trademark name (the limitation recites a mathematical concept: calculating similarity scores). Step 2A Prong Two Analysis: This judicial exception is not integrated into a practical application because there are no additional elements to provide a practical application. Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

With respect to claim(s) 3 and 11: Step 2A, prong one of the 2019 PEG: wherein the trademark name similarity score for each trademark name in the set of search results indicates a level of auditive similarity between the trademark name and the candidate trademark name (the limitation recites a mathematical concept: calculating similarity scores). Step 2A Prong Two Analysis: inputting, by the processor, the training data set into the machine learning model (recites insignificant extra-solution activity of data gathering).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

With respect to claim(s) 4, 12, and 19: Step 2A, prong one of the 2019 PEG: generating a fragment of the candidate trademark name (mental step of observing and/or evaluating trademark data on a computer screen; the computer is being used as a generic tool); generating, for each trademark name in the set of search results, a fragment of the trademark name (mental step of observing and/or evaluating trademark data on a computer screen; the computer is being used as a generic tool). Step 2A Prong Two Analysis: providing the fragment of the candidate trademark name and the fragment of the trademark name for each trademark name in the set of search results to the first trained model (recites insignificant extra-solution activity of outputting trademark data). Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

With respect to claim(s) 5 and 13: Step 2A, prong one of the 2019 PEG: Examiner is of the position the dependent claim is directed toward additional elements. Step 2A Prong Two Analysis: wherein the first trained model outputs the trademark name similarity score for each trademark name in the set of search results based at least on a number of terms in the fragment of the candidate trademark name being deemed similar to a number of terms in the fragment of the trademark name (recites insignificant extra-solution activity of outputting trademark data). Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
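The fragment-based scoring recited for claims 5/13 and 6/15 — a score driven by how many terms in the two name fragments are deemed similar versus dissimilar — can be sketched as a simple term-overlap ratio. The whitespace tokenizer and the Jaccard-style formula are illustrative assumptions, not the claimed trained model:

```python
def fragment_terms(name: str) -> set[str]:
    """Split a trademark name into lowercase term fragments (assumed tokenizer)."""
    return set(name.lower().split())

def name_similarity(candidate: str, result: str) -> float:
    """Score in [0, 1] based on terms deemed similar (shared) versus
    dissimilar (unshared) between the two fragments; Jaccard-style sketch."""
    a, b = fragment_terms(candidate), fragment_terms(result)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

score = name_similarity("ACME ROCKET POWER", "ACME ROCKET FUEL")
# Two shared terms out of four distinct terms overall -> 2/4 = 0.5
```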
With respect to claim(s) 6 and 15: Step 2A, prong one of the 2019 PEG: Examiner is of the position the dependent claim is directed toward additional elements. Step 2A Prong Two Analysis: wherein the first trained model outputs the trademark name similarity score for each trademark name in the set of search results based at least on a number of terms in the fragment of the candidate trademark name being deemed dissimilar to a number of terms in the fragment of the trademark name (recites insignificant extra-solution activity of outputting trademark data). Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

With respect to claim(s) 7: Step 2A, prong one of the 2019 PEG: assigning a weight to each term of the fragment of the candidate trademark name, the weight based at least on a frequency of the term in one or more goods/services classes, and wherein the set of combined scores is generated based at least on the trademark name similarity scores, the weights, and the goods/services similarity scores (mental step of observing and/or evaluating trademark data on a computer screen; the computer is being used as a generic tool). Step 2A Prong Two Analysis: This judicial exception is not integrated into a practical application because there are no additional elements to provide a practical application. Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

With respect to claim(s) 8 and 20: Step 2A, prong one of the 2019 PEG: Examiner is of the position the dependent claim is directed toward additional elements.
Step 2A Prong Two Analysis: providing the candidate trademark name and the set of search results to a second trained model that outputs, for each trademark name in the set of search results, a semantic similarity score between the trademark name and the candidate trademark name, and wherein the set of combined scores is generated based at least on the trademark name similarity scores, the semantic similarity scores, and the goods/services similarity scores (recites insignificant extra-solution activity of outputting trademark data). Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

With respect to claim(s) 18: Step 2A, prong one of the 2019 PEG: wherein the trademark name similarity score for each trademark name in the set of search results indicates one of a level of visual similarity or a level of auditive similarity between the trademark name and the candidate trademark name (the limitation recites a mathematical concept: calculating similarity scores). Step 2A Prong Two Analysis: This judicial exception is not integrated into a practical application because there are no additional elements to provide a practical application. Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Keyagnaert et al. [1] (US Pub. No. 20180225363) and Keyagnaert et al. [2] (US Pub. No. 20200111021) and MAU et al. (US Pub. No. 20200401851) in further view of Wang et al. (US Pub. No. 20220046110).

With respect to claim 1, Keyagnaert et al. [1] teaches a method for ranking trademark search results, comprising: receiving information about a candidate trademark, the information including at least a candidate trademark name and goods/services information (Paragraph 122 discloses the candidate retrieval engine 740 and the candidate presentation engine 750 can selectively retrieve and/or filter trademarks from the one or more source repositories 790); obtaining a set of search results comprising trademark names having at least a minimum degree of similarity with the candidate trademark name (Paragraph 123 discloses trademark results associated with a similarity measure, such as the prefix similarity measure); providing the candidate trademark name and the set of search results to a first trained model, the first trained model outputs, for each trademark name in the set of search results, a trademark name similarity score between the trademark name and the candidate trademark name, the first trained model is trained based at least on training data that comprises a subset of pairs of historical trademark names and a corresponding label stored in a database that marks each pair as either similar or dissimilar, the subset of pairs of the historical trademark names comprising a reduction in a ratio of false positive cases to true positive cases compared to a larger data set from which the subset of pairs is selected (Paragraph 162 discloses candidate presentation engine 850 can include a scoring module 852 and a filtering module 854, and can filter out false positives or irrelevant trademarks provided in the set of trademarks output by the candidate retrieval engine 840); for each trademark name in the set of search results, obtaining a goods/services similarity score indicating a level of similarity between goods/services information associated with the trademark name and goods/services information associated with the candidate trademark name, the goods/services similarity score generated by a goods/services scoring model trained in accordance with an unsupervised learning algorithm based on a corpus of training data relating to registered trademarks (Paragraph 137 discloses goods and/or services similarity engine 100 can generate goods and/or services similarity scores and those descriptions of goods and/or services that satisfy a threshold score can be used in addition to the reference description of goods and/or services to generate directives and/or queries for candidate retrieval); generating a set of combined scores based at least on the trademark name similarity scores and the goods/services similarity scores (Paragraph 165 discloses goods or services similarity measures and/or similarity scores generated by the goods and/or services similarity engine 100 can be integrated in the rules logic for certain jurisdictions to take this dimension into account when generating similarity scores for trademarks); and providing a ranked list of the search results based at least on the set of combined scores (Paragraph 167 discloses the results set to
rank and/or prioritize the search results in the presentation to a user).

Keyagnaert et al. [1] does not explicitly disclose for each trademark name in the set of search results, obtaining a goods/services similarity score indicating a level of similarity between goods/services information associated with the trademark name and goods/services information associated with the candidate trademark name, the goods/services similarity score generated by a goods/services scoring model trained in accordance with an unsupervised learning algorithm based on a corpus of training data relating to registered trademarks. However, Keyagnaert et al. [2] teaches for each trademark name in the set of search results, obtaining a goods/services similarity score indicating a level of similarity between goods/services information associated with the trademark name and goods/services information associated with the candidate trademark name, the goods/services similarity score generated by a goods/services scoring model trained in accordance with an unsupervised learning algorithm based on a corpus of training data relating to registered trademarks (Paragraph 48 discloses unsupervised learning algorithms).

Therefore, it would have been obvious at the time the invention was made to a person having ordinary skill in the art to modify Keyagnaert et al. [1] with Keyagnaert et al. [2] to include for each trademark name in the set of search results, obtaining a goods/services similarity score indicating a level of similarity between goods/services information associated with the trademark name and goods/services information associated with the candidate trademark name, the goods/services similarity score generated by a goods/services scoring model trained in accordance with an unsupervised learning algorithm based on a corpus of training data relating to registered trademarks. This would have provided machine learning which would have improved similarity scoring for trademarks.
See Keyagnaert et al. [2] Paragraphs 3-13.

Keyagnaert et al. [1] as modified by Keyagnaert et al. [2] does not include wherein the false positive cases comprise trademark pairs in the training data that are marked as dissimilar and the true positive cases comprise trademark pairs in the training data that are marked as similar. However, MAU et al. discloses wherein the false positive cases comprise trademark pairs in the training data that are marked as dissimilar and the true positive cases comprise trademark pairs in the training data that are marked as similar (Paragraph 68 discloses labels can be used to train classification models on hierarchically labelled classes as mentioned above, and the trained classification models may be used to generate code suggestions for examiners during new registrations and help to find similar design/trademark images by providing the image or an object in the image desired to be registered to the classification system described herein, and Paragraph 52 discloses false positive and false negative confidence scores).

Therefore, it would have been obvious at the time the invention was made to a person having ordinary skill in the art to modify Keyagnaert et al. [1] and Keyagnaert et al. [2] with MAU et al. to include wherein the false positive cases comprise trademark pairs in the training data that are marked as dissimilar and the true positive cases comprise trademark pairs in the training data that are marked as similar. This would have provided machine learning which would have improved similarity scoring for trademarks. See MAU et al. Paragraphs 3-8.

Keyagnaert et al. [1] as modified by Keyagnaert et al. [2] and MAU et al. does not disclose resulting in the training data comprising an increased representation of trademark pairs that are marked as similar compared to the larger dataset. However, Wang et al.
teaches resulting in the training data comprising an increased representation of trademark pairs that are marked as similar compared to the larger dataset (Paragraph 71 discloses that a training data fetching API 408 may be used to provide access to the trademark data, training datasets may be created depending on one or more criteria included in a request, and each training dataset may include the data referenced in one or more changesets).

Therefore, it would have been obvious at the time the invention was made to a person having ordinary skill in the art to modify Keyagnaert et al. [1] and Keyagnaert et al. [2] and MAU et al. with Wang et al. to include resulting in the training data comprising an increased representation of trademark pairs that are marked as similar compared to the larger dataset. This would have provided machine learning which would have improved similarity scoring for trademarks. See Wang et al. Paragraphs 2-4. The Keyagnaert et al. [1] reference as modified by Keyagnaert et al. [2] and MAU et al. and Wang et al. teaches all the limitations of claim 1.

Regarding claim 2, Keyagnaert et al. [1] discloses the method of claim 1, wherein the trademark name similarity score for each trademark name in the set of search results indicates a level of visual similarity between the trademark name and the candidate trademark name (Paragraph 278 discloses the GUI 3300 can list various similarity measures generated by the trademark similarity engine including, for example, "Visual Similarity", which indicates that the trademark search string looks similar to the trademark returned by the search, and "Auditive Similar"). The Keyagnaert et al. [1] reference as modified by Keyagnaert et al. [2] and MAU et al. and Wang et al. teaches all the limitations of claim 1.

Regarding claim 3, Keyagnaert et al. [1] discloses the method of claim 1, wherein the trademark name similarity score for each trademark name in the set of search results indicates a level of auditive similarity between the trademark name and the candidate trademark name (Paragraph 278 discloses the GUI 3300 can list various similarity measures generated by the trademark similarity engine including, for example, "Visual Similarity", which indicates that the trademark search string looks similar to the trademark returned by the search, and "Auditive Similar"). The Keyagnaert et al. [1] reference as modified by Keyagnaert et al. [2] and MAU et al. and Wang et al. teaches all the limitations of claim 1.

Regarding claim 4, Keyagnaert et al. [1] discloses the method of claim 1, further comprising: generating a fragment of the candidate trademark name (Paragraph 112 discloses trademark similarity score can be based on, for example, similarity measures (prefix, suffix, infix, string edit distance, change-add-delete (CAD), combinations of prefix, suffix, infix, foreign letters, vowel patterns, and/or any other suitable similarity measures or combinations of similarity measures) on visual, phonetic, semantic, translation, morphological and transliterated representations of the trademarks returned by the search); generating, for each trademark name in the set of search results, a fragment of the trademark name (Paragraph 112 discloses trademark similarity score can be based on, for example, similarity measures (prefix, suffix, infix, string edit distance, change-add-delete (CAD), combinations of prefix, suffix, infix, foreign letters, vowel patterns, and/or any other suitable similarity measures or combinations of similarity measures) on visual, phonetic, semantic, translation, morphological and transliterated representations of the trademarks returned by the search); and providing the fragment of the candidate trademark name and the fragment of the trademark name for each trademark name in the set of search results
to the first trained model (Paragraph 141 discloses creating or building the queries and the candidate presentation engine should include a wide variation of trademarks in the results that are semantically and/or phonetically similar to the input string). The Keyagnaert et al. [1] reference as modified by Keyagnaert et al. [2] and MAU et al. and Wang et al. teaches all the limitations of claim 4.

Regarding claim 5, Keyagnaert et al. [1] discloses the method of claim 4, wherein the first trained model outputs the trademark name similarity score for each trademark name in the set of search results based at least on a number of terms in the fragment of the candidate trademark name being deemed similar to a number of terms in the fragment of the trademark name (Paragraph 112 discloses trademark similarity engine 640 to generate a trademark similarity score. The trademark similarity score can be based on, for example, similarity measures (prefix, suffix, infix, string edit distance, change-add-delete (CAD), combinations of prefix, suffix, infix, foreign letters, vowel patterns, and/or any other suitable similarity measures or combinations of similarity measures) on visual, phonetic, semantic, translation, morphological and transliterated representations of the trademarks returned by the search). The Keyagnaert et al. [1] reference as modified by Keyagnaert et al. [2] and MAU et al. and Wang et al. teaches all the limitations of claim 4.

Regarding claim 6, Keyagnaert et al. [1] discloses the method of claim 4, wherein the first trained model outputs the trademark name similarity score for each trademark name in the set of search results based at least on a number of terms in the fragment of the candidate trademark name being deemed dissimilar to a number of terms in the fragment of the trademark name (Paragraph 112 discloses trademark similarity engine 640 to generate a trademark similarity score.
The trademark similarity score can be based on, for example, similarity measures (prefix, suffix, infix, string edit distance, change-add-delete (CAD), combinations of prefix, suffix, infix, foreign letters, vowel patterns, and/or any other suitable similarity measures or combinations of similarity measures) on visual, phonetic, semantic, translation, morphological and transliterated representations of the trademarks returned by the search). The Keyagnaert et al. [1] reference as modified by Keyagnaert et al. [2] and MAU et al. and Wang et al. teaches all the limitations of claim 4. Regarding claim 7, Keyagnaert et al. [1] discloses the method of claim 4, further comprising: assigning a weight to each term of the fragment of the candidate trademark name, the weight based at least on a frequency of the term in one or more goods/services classes, and wherein the set of combined scores is generated based at least on the trademark name similarity scores, the weights, and the goods/services similarity scores (Paragraph 99 discloses Each similarity score that forms the aggregate can be assigned a weighting factor to emphasis some of the similarity scores contributions to overall similarity score and to de-emphasize some of the similarity scores contributions to the overall similarity score). The Keyagnaert et al. [1] reference as modified by Keyagnaert et al. [2] and MAU et al. and Wang et al. teaches all the limitations of claim 1. Regarding claim 8, Keyagnaert et al. 
[1] discloses the method of claim 1, further comprising: providing the candidate trademark name and the set of search results to a second trained model that outputs, for each trademark name in the set of search results, a semantic similarity score between the trademark name and the candidate trademark name, and wherein the set of combined scores is generated based at least on the trademark name similarity scores, the semantic similarity scores, and the goods/services similarity scores (Paragraph 141 discloses the candidate retrieval engine 840 should include as many variations of the input string as possible (e.g., phonetic and semantic variations) when creating or building the queries, and the candidate presentation engine should include a wide variation of trademarks in the results that are semantically and/or phonetically similar to the input string; Paragraph 41 discloses triggering specific models).

With respect to claim 9, Keyagnaert et al. [1] teaches a system for ranking trademark search results, comprising: at least one processor circuit (Paragraph 231 discloses a processor); and at least one memory that stores program code configured to be executed by the at least one processor circuit (Paragraph 231 discloses a memory), the program code configured to, when executed by the at least one processor circuit, cause the system to: receive information about a candidate trademark, the information including at least a candidate trademark name and goods/services information (Paragraph 122 discloses the candidate retrieval engine 740 and the candidate presentation engine 750 can selectively retrieve and/or filter trademarks from the one or more source repositories 790); obtain a set of search results comprising trademark names having at least a minimum degree of similarity with the candidate trademark name (Paragraph 123 discloses trademark results associated with a similarity measure, such as the prefix similarity measure); provide the candidate trademark name and the set of search results to a first trained model, the first trained model outputs, for each trademark name in the set of search results, a trademark name similarity score between the trademark name and the candidate trademark name, the first trained model is trained based at least on training data that comprises a subset of pairs of historical trademark names and a corresponding label stored in a database that marks each pair as either similar or dissimilar, the subset of pairs of the historical trademark names comprising a reduction in a ratio of false positive cases to true positive cases compared to a larger data set from which the subset of pairs is selected (Paragraph 162 discloses the candidate presentation engine 850 can include a scoring module 852 and a filtering module 854, and can filter out false positives or irrelevant trademarks provided in the set of trademarks output by the candidate retrieval engine 840); for each trademark name in the set of search results, obtain a goods/services similarity score indicating a level of similarity between goods/services information associated with the trademark name and goods/services information associated with the candidate trademark name, the goods/services similarity score generated by a goods/services scoring model trained in accordance with an unsupervised learning algorithm based on a corpus of training data relating to registered trademarks (Paragraph 137 discloses the goods and/or services similarity engine 100 can generate goods and/or services similarity scores, and those descriptions of goods and/or services that satisfy a threshold score can be used in addition to the reference description of goods and/or services to generate directives and/or queries for candidate retrieval); generate a set of combined scores based at least on the trademark name similarity scores and the goods/services similarity scores (Paragraph 165 discloses goods or services similarity measures and/or similarity scores generated by the goods and/or services similarity engine 100 can be integrated in the rules logic for certain jurisdictions to take this dimension into account when generating similarity scores for trademarks); and provide a ranked list of the search results based at least on the set of combined scores (Paragraph 167 discloses using the results set to rank and/or prioritize the search results in the presentation to a user).

Keyagnaert et al. [1] does not explicitly disclose, for each trademark name in the set of search results, obtaining a goods/services similarity score indicating a level of similarity between goods/services information associated with the trademark name and goods/services information associated with the candidate trademark name, the goods/services similarity score generated by a goods/services scoring model trained in accordance with an unsupervised learning algorithm based on a corpus of training data relating to registered trademarks. However, Keyagnaert et al. [2] teaches this limitation (Paragraph 48 discloses unsupervised learning algorithms). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Keyagnaert et al. [1] with Keyagnaert et al.
[2] to include the claimed unsupervised goods/services scoring model: for each trademark name in the set of search results, obtaining a goods/services similarity score indicating a level of similarity between goods/services information associated with the trademark name and goods/services information associated with the candidate trademark name, the goods/services similarity score generated by a goods/services scoring model trained in accordance with an unsupervised learning algorithm based on a corpus of training data relating to registered trademarks. This would have provided machine learning that improves similarity scoring for trademarks. See Keyagnaert et al. [2], Paragraphs 3-13.

Keyagnaert et al. [1] as modified by Keyagnaert et al. [2] does not include wherein the false positive cases comprise trademark pairs in the training data that are marked as dissimilar and the true positive cases comprise trademark pairs in the training data that are marked as similar. However, MAU et al. discloses this limitation (Paragraph 68 discloses labels can be used to train classification models on hierarchically labelled classes, and the trained classification models may be used to generate code suggestions for examiners during new registrations and to help find similar design/trademark images by providing the image, or an object in the image, desired to be registered to the classification system described herein; Paragraph 52 discloses false positive and false negative confidence scores). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Keyagnaert et al. [1] and Keyagnaert et al. [2] with MAU et al. to include this limitation. This would have provided machine learning that improves similarity scoring for trademarks. See MAU et al., Paragraphs 3-8.

Keyagnaert et al. [1] as modified by Keyagnaert et al. [2] and MAU et al. does not disclose resulting in the training data comprising an increased representation of trademark pairs that are marked as similar compared to the larger dataset. However, Wang et al. teaches this limitation (Paragraph 71 discloses a training data fetching API 408 may be used to provide access to the trademark data; training datasets may be created depending on one or more criteria included in a request, and each training dataset may include the data referenced in one or more changesets). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Keyagnaert et al. [1], Keyagnaert et al. [2], and MAU et al. with Wang et al. to include this limitation. This would have provided machine learning that improves similarity scoring for trademarks. See Wang et al., Paragraphs 2-4.

With respect to claim 10, it is rejected on grounds corresponding to claim 2 above, because claim 10 is substantially equivalent to claim 2. With respect to claim 11, it is rejected on grounds corresponding to claim 3 above, because claim 11 is substantially equivalent to claim 3.
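The training-data limitation mapped to Wang et al. above can be pictured with a short sketch. This is purely illustrative, not from any cited reference: the function name and the keep ratio are my own assumptions. Every pair labeled "similar" is kept while only a fraction of "dissimilar" pairs survive, which simultaneously reduces the ratio of potential false-positive cases to true-positive cases and increases the representation of similar pairs relative to the larger dataset.

```python
# Hypothetical sketch (not the applicant's or any reference's actual code):
# subsampling a labeled set of trademark pairs so that pairs marked
# "similar" are over-represented compared to the larger dataset.
import random

def subsample_pairs(labeled_pairs, dissimilar_keep_ratio=0.2, seed=0):
    """labeled_pairs: list of ((name_a, name_b), label) with label in
    {"similar", "dissimilar"}. Keep every "similar" pair, but only a
    random fraction of the "dissimilar" ones."""
    rng = random.Random(seed)  # fixed seed keeps the subset reproducible
    subset = []
    for pair, label in labeled_pairs:
        if label == "similar" or rng.random() < dissimilar_keep_ratio:
            subset.append((pair, label))
    return subset
```

With `dissimilar_keep_ratio=0.2`, a dataset of 5 similar and 20 dissimilar pairs shrinks to roughly 5 similar and 4 dissimilar pairs, so the similar class rises from 20% of the data to well over half.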
With respect to claim 12, it is rejected on grounds corresponding to claim 4 above, because claim 12 is substantially equivalent to claim 4. With respect to claim 13, it is rejected on grounds corresponding to claim 5 above, because claim 13 is substantially equivalent to claim 5. With respect to claim 14, it is rejected on grounds corresponding to claim 6 above, because claim 14 is substantially equivalent to claim 6. With respect to claim 15, it is rejected on grounds corresponding to claim 7 above, because claim 15 is substantially equivalent to claim 7. With respect to claim 16, it is rejected on grounds corresponding to claim 8 above, because claim 16 is substantially equivalent to claim 8.

With respect to claim 17, Keyagnaert et al. [1] teaches a non-transitory computer-readable storage medium having program instructions recorded thereon that, when executed by at least one processor, perform a method comprising: receiving information about a candidate trademark, the information including at least a candidate trademark name and goods/services information (Paragraph 122 discloses the candidate retrieval engine 740 and the candidate presentation engine 750 can selectively retrieve and/or filter trademarks from the one or more source repositories 790); obtaining a set of search results comprising trademark names having at least a minimum degree of similarity with the candidate trademark name (Paragraph 123 discloses trademark results associated with a similarity measure, such as the prefix similarity measure); providing the candidate trademark name and the set of search results to a first trained model, the first trained model outputs, for each trademark name in the set of search results, a trademark name similarity score between the trademark name and the candidate trademark name, the first trained model is trained based at least on training data that comprises a subset of pairs of historical trademark names and a corresponding label stored in a database that marks each pair as either similar or dissimilar, the subset of pairs of the historical trademark names comprising a reduction in a ratio of false positive cases to true positive cases compared to a larger data set from which the subset of pairs is selected (Paragraph 162, cited above); for each trademark name in the set of search results, obtaining a goods/services similarity score indicating a level of similarity between goods/services information associated with the trademark name and goods/services information associated with the candidate trademark name, the goods/services similarity score generated by a goods/services scoring model trained in accordance with an unsupervised learning algorithm based on a corpus of training data relating to registered trademarks (Paragraph 137, cited above); generating a set of combined scores based at least on the trademark name similarity scores and the goods/services similarity scores (Paragraph 165, cited above); and providing a ranked list of the search results based at least on the set of combined scores (Paragraph 167 discloses using the results set to rank and/or prioritize the search results in the presentation to a user).

Keyagnaert et al. [1] does not explicitly disclose the unsupervised goods/services scoring model recited in claim 17. However, Keyagnaert et al. [2] teaches this limitation (Paragraph 48 discloses unsupervised learning algorithms). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Keyagnaert et al. [1] with Keyagnaert et al. [2] to include this limitation, for the reasons given with respect to claim 9 above. This would have provided machine learning that improves similarity scoring for trademarks. See Keyagnaert et al. [2], Paragraphs 3-13.
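For the goods/services scoring model "trained in accordance with an unsupervised learning algorithm based on a corpus of training data relating to registered trademarks", one minimal stand-in is corpus-level TF-IDF weighting with cosine similarity. This is a hedged sketch: the cited references may use far richer unsupervised techniques, and every name below is hypothetical.

```python
# Illustrative only: an unsupervised goods/services similarity score.
# "Training" here is just learning inverse-document-frequency weights
# from a corpus of registered-trademark goods/services descriptions.
import math
from collections import Counter

def train_idf(corpus):
    """Learn IDF weights from a list of description strings (unsupervised)."""
    n = len(corpus)
    df = Counter()
    for text in corpus:
        df.update(set(text.lower().split()))  # document frequency per token
    return {t: math.log(n / c) + 1.0 for t, c in df.items()}

def goods_services_similarity(desc_a, desc_b, idf):
    """Cosine similarity of TF-IDF vectors for two descriptions, in [0, 1]."""
    def vec(text):
        tf = Counter(text.lower().split())
        return {t: f * idf.get(t, 1.0) for t, f in tf.items()}
    va, vb = vec(desc_a), vec(desc_b)
    dot = sum(w * vb.get(t, 0.0) for t, w in va.items())
    na = math.sqrt(sum(w * w for w in va.values()))
    nb = math.sqrt(sum(w * w for w in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Identical descriptions score 1.0 and descriptions with no shared terms score 0.0, so the score can feed directly into a combined ranking alongside the name similarity score.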
Keyagnaert et al. [1] as modified by Keyagnaert et al. [2] does not include wherein the false positive cases comprise trademark pairs in the training data that are marked as dissimilar and the true positive cases comprise trademark pairs in the training data that are marked as similar. However, MAU et al. discloses this limitation (Paragraphs 68 and 52, cited above). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Keyagnaert et al. [1] and Keyagnaert et al. [2] with MAU et al. to include this limitation. This would have provided machine learning that improves similarity scoring for trademarks. See MAU et al., Paragraphs 3-8.

Keyagnaert et al. [1] as modified by Keyagnaert et al. [2] and MAU et al. does not disclose resulting in the training data comprising an increased representation of trademark pairs that are marked as similar compared to the larger dataset. However, Wang et al. teaches this limitation (Paragraph 71, cited above). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Keyagnaert et al. [1], Keyagnaert et al. [2], and MAU et al. with Wang et al. to include this limitation. This would have provided machine learning that improves similarity scoring for trademarks. See Wang et al., Paragraphs 2-4. The Keyagnaert et al. [1] reference as modified by Keyagnaert et al. [2], MAU et al., and Wang et al. teaches all the limitations of claim 17.

Regarding claim 18, Keyagnaert et al. [1] discloses the non-transitory computer-readable medium of claim 17, wherein the trademark name similarity score for each trademark name in the set of search results indicates one of a level of visual similarity or a level of auditive similarity between the trademark name and the candidate trademark name (Paragraph 278 discloses the GUI 3300 can list various similarity measures generated by the trademark similarity engine including, for example, “Visual Similarity”, which indicates that the trademark search string looks similar to the trademark returned by the search, and “Auditive Similar”). With respect to claim 19, it is rejected on grounds corresponding to claim 4 above, because claim 19 is substantially equivalent to claim 4.
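The visual and auditive similarity measures discussed for claim 18, and the string edit distance listed among the Paragraph 112 similarity measures, can be sketched as follows. This is an illustrative toy, not the trademark similarity engine's actual method: the crude vowel-dropping phonetic key and the 0.7/0.3 weights are assumptions of mine, standing in for whatever the engine really computes.

```python
# Toy visual/auditive similarity and combined-score ranking (illustrative).
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def visual_similarity(a, b):
    """1.0 for identical names, approaching 0.0 as they diverge."""
    longest = max(len(a), len(b)) or 1
    return 1.0 - edit_distance(a.upper(), b.upper()) / longest

def phonetic_key(name):
    """Crude auditive key: drop vowels and collapse repeated letters."""
    out = []
    for ch in name.upper():
        if ch.isalpha() and ch not in "AEIOU" and (not out or out[-1] != ch):
            out.append(ch)
    return "".join(out)

def rank_results(candidate, results):
    """Combine the two scores and return the search results best-first."""
    def combined(name):
        auditive = 1.0 if phonetic_key(name) == phonetic_key(candidate) else 0.0
        return 0.7 * visual_similarity(candidate, name) + 0.3 * auditive
    return sorted(results, key=combined, reverse=True)
```

For a candidate "ACME", a near-identical mark like "ACMEE" scores high on both measures and ranks above an unrelated mark like "ZENITH", mirroring the ranked list of combined scores the claims recite.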
With respect to claim 20, it is rejected on grounds corresponding to claim 8 above, because claim 20 is substantially equivalent to claim 8.

Relevant Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Patent 8667609 is directed to "System To Inform About Trademarks Similar To Provided Input". Column 3, Lines 49-65: various embodiments of the present invention relate to systems and methods for informing requesters about trademarks similar to a provided input (e.g., a domain name, an e-mail address, an instant messaging user name, or an advertisement). For example, some embodiments include receiving, from a requester, a request to perform a trademark search for a provided input. The requester, for example, can be an individual, an organization, a service provider, etc. One or more databases can be searched for trademarks related to the provided input. In some embodiments, for example, the provided input can be directly used for searching the databases. In other cases, the provided input can be processed or parsed into phrases for searching the databases. Based on the returned trademarks, a relevance score for each trademark can be determined. The relevance score can be based on a variety of inputs and factors, such as the status of a trademark, filing date, first use date, litigation history, comparison of potential classes with classes assigned to the trademarks, and others. Then, a notification can be generated based at least in part on the relevance score.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS E ALLEN, whose telephone number is (571) 270-3562. The examiner can normally be reached Monday through Thursday, 8:30-6:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Boris Gorney, can be reached at (571) 270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/N.E.A/
Examiner, Art Unit 2154

/BORIS GORNEY/
Supervisory Patent Examiner, Art Unit 2154

Prosecution Timeline

Oct 17, 2022: Application Filed
Dec 13, 2023: Non-Final Rejection — §101, §103, §112
Mar 19, 2024: Response Filed
Jun 26, 2024: Final Rejection — §101, §103, §112
Nov 05, 2024: Request for Continued Examination
Nov 14, 2024: Response after Non-Final Action
Feb 05, 2025: Non-Final Rejection — §101, §103, §112
Jun 04, 2025: Examiner Interview Summary
Jun 04, 2025: Applicant Interview (Telephonic)
Jun 10, 2025: Response Filed
Sep 16, 2025: Final Rejection — §101, §103, §112
Nov 19, 2025: Response after Non-Final Action
Dec 19, 2025: Request for Continued Examination
Jan 08, 2026: Response after Non-Final Action
Mar 19, 2026: Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12380068: RECENT FILE SYNCHRONIZATION AND AGGREGATION METHODS AND SYSTEMS (granted Aug 05, 2025; 2y 5m to grant)
Patent 12339822: METHOD AND SYSTEM FOR MIGRATING CONTENT BETWEEN ENTERPRISE CONTENT MANAGEMENT SYSTEMS (granted Jun 24, 2025; 2y 5m to grant)
Patent 12321704: COMPOSITE EXTRACTION SYSTEMS AND METHODS FOR ARTIFICIAL INTELLIGENCE PLATFORM (granted Jun 03, 2025; 2y 5m to grant)
Patent 12271379: CROSS-DATABASE JOIN QUERY (granted Apr 08, 2025; 2y 5m to grant)
Patent 12259876: SYSTEM AND METHOD FOR A HYBRID CONTRACT EXECUTION ENVIRONMENT (granted Mar 25, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 77%
With Interview: 93% (+16.2%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 760 resolved cases by this examiner. Grant probability derived from career allow rate.
