DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1 to 4, 6, 8 to 13, 15 to 18, and 20 are objected to because of the following informalities:
Independent claims 1, 11, and 16 set forth a limitation “wherein each NER comprises a respective algorithm specialized for different annotations”, but “each NER” lacks express antecedent basis. These independent claims have a prior recitation of “a set of Named Entity Recognition (NER) annotators” and “each NER annotator”, but not “each NER”. Applicants can overcome this objection by changing “wherein each NER comprises a respective algorithm specialized for different annotations” to “wherein each NER annotator comprises a respective algorithm specialized for different annotations”.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 to 4, 6, 9 to 13, 15 to 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Biddle et al. (U.S. Patent Publication 2020/0175106) in view of Farre Guiu et al. (U.S. Patent Publication 2022/0245554).
Concerning independent claims 1, 11, and 16, Biddle et al. discloses a method, system, and computer program product for supervised machine learning models of documents, comprising:
“receiving a document containing unstructured data” – a system obtains annotated versions of a set of documents (Abstract); supervised machine learning is useful for analyzing a large set of documents (¶[0001]); a method comprises obtaining a machine learning model of a set of documents (¶[0004]); annotation management system 400 may receive annotated versions of the documents (¶[0045]: Figure 3); implicitly, documents comprise text data that is “unstructured”; that is, documents conventionally include unstructured text of sentences in natural language;
“analyzing the document using a set of Named Entity Recognition (NER) annotators, each NER annotator generating an annotated entity for a mention of the document according to their respective different capabilities, wherein each of the annotated entities comprises the mention and an assigned entity type” – a system obtains annotated versions of the documents, the documents being annotated by annotators (“a set of Named Entity Recognition (NER) annotators”) (Abstract); annotators may have expertise in the domain of the respective documents (“according to their respective capabilities”) (¶[0001]); a system comprises an annotation component configured to obtain annotated versions of the documents, the documents being annotated by annotators (¶[0007]); annotators may be subject matter experts (SMEs) in the topic of the document (“according to their respective capabilities”) (¶[0016]); during the annotation process, different annotators annotating a set of documents may have different interpretations of text within documents for any of a number of reasons; two annotators may submit two annotations for one or more entities or relations that are inconsistent with each other (“each annotator generating an annotated entity for a mention of the document”) (¶[0017]); here, an annotation of an entity includes an entity (“a mention”) and an annotation (“an assigned entity type”);
“responsive to the NER annotators generating annotated entities comprising different assigned entity types for the mention, resolving, for the annotated entities, a target entity type to form a set of first resolved entities” – a conflict between a plurality of annotations of the annotated versions of the documents is identified; a machine learning model includes a set of entities and relations defining relationships between entities; the identified conflict is resolved by at least one of identifying the correct annotation between the conflicting options (Abstract); a method further comprises identifying a conflict between a plurality of annotations of the annotated versions of the documents, the conflict relating to a part of text that maps to entity mentions or relations between entities that belong to the machine learning model (¶[0004]); a method also comprises resolving the identified conflict; resolving the identified conflict comprises at least one of: identifying the correct annotation between the conflicting options, and changing an annotation of the annotated version of the document (¶[0006]); conflicting annotations covering text and entity type data are extracted from the annotation agreement process (¶[0055]);
“wherein resolving the target entity type comprises leveraging information in a knowledge base to harmonize the different assigned entity types by changing at least one of the assigned entity types in one of the annotated entities to the target entity type” – an identified conflict is resolved by at least one of identifying the correct annotation between the conflicting options, splitting the annotated text into two separate entities or relations, and/or changing an annotation of the annotated version of the document (Abstract); annotation management system 400 may be configured to identify a pattern of conflicts, and, based on the identified pattern of conflicts, change one or more annotations of the annotated version of the document (¶[0031]: Figure 4); annotation management system 400 may then change an annotation of the annotated version of the document; changing the annotation may include changing the annotation to be consistent with the resolution (“to harmonize the different assigned entity types”) (¶[0048]: Figure 3: Step 348); annotation management system 400 may map extracted topics to a publicly available ontology; once extracted, the annotation management system 400 may compare the topics to the machine learning model to determine if the topic is relevant (¶[0057]: Figure 4); here, an ontology or a hierarchy is “information in a knowledge base”; changing an annotation so that the annotations are consistent to resolve a conflict by referring to an ontology is “leveraging information in a knowledge base to harmonize the different assigned entity types by changing at least one of the assigned entity types in one of the annotated entities to the target entity type”;
“wherein the changing is based on: the target entity type being more specific than at least one of the assigned entity types according to the knowledge base; and a length of mention information of the target entity type being larger than the length of mention information of the at least one of the assigned entity types according to the knowledge base” – an identified conflict is resolved by at least one of identifying the correct annotation between the conflicting options, splitting the annotated text into two separate entities or relations, or generating a new entity at the same or a less specific hierarchical level as the entities or relation in conflict (“the target entity type being more specific than at least one of the assigned entity types”) (Abstract); Annotation Agreement Manager may determine to replace the entities with new entities on a more generic higher hierarchical level (¶[0021]); hierarchical levels may relate to a specificity of the entity, where relatively more specific entities that correspond to less words or phrases may be on a ‘lower’ hierarchical level and relatively less specific entities that correspond to more words or phrases may be on a ‘higher’ hierarchical level (“a length of mention information of the target entity type being larger than the length of mention information of the at least one of the assigned entity types”) (¶[0027]: Figure 4); annotation management system 400 may generate a new entity at the same or less specific, e.g., higher, hierarchical level as the entities or relation in conflict; generating a new entity or relation for the conflict in question at the same or less specific hierarchical level as the entities or relation in conflict may include annotating text as a new entity or relation (¶[0048]: Figure 3: Step 346); here, “a length of mention” is construed as a quantity or number of words or phrases of the entity, so that more words or phrases for an entity corresponds to “a length of mention information . . . being larger”; Compare Specification, ¶[0031];
“after forming the set of first resolved entities, associating a computed score to each entity in the set of first resolved entities using the information in the knowledge base” – the annotation management system 400 may be configured to generate an accuracy score that indicates a level of annotation accuracy of an annotator; the accuracy score may relate to a given entity or relation based on the identified pattern of conflicts; further, the generated accuracy score may be associated with the entities or relations annotated by the annotator (¶[0032]: Figure 4); the annotation management system 400 may be configured to calculate an annotator score, e.g., accuracy score; annotation management system 400 may determine a score for each annotator (¶[0061]: Figure 4); the annotation management system 400 may generate an accuracy score that indicates a level of annotation accuracy for a respective annotator based on the identified pattern of conflicts (¶[0064]: Figure 4);
“resolving the set of first resolved entities using the associated computed score for each entity and the information in the knowledge base to create a set of final entities” – resolving the identified conflict may then be based on the accuracy score associated with the entities or relations annotated by the annotator (¶[0032]: Figure 4); annotation management system 400 may generate an accuracy score that indicates a level of annotation accuracy for a respective annotator based on the identified pattern of conflicts (¶[0061]: Figure 4); annotation management system 400 may be configured to resolve identified conflicts based on one or more accuracy scores associated with the entities or relations annotated by one or more annotators of the conflict (¶[0064]: Figure 4).
Concerning independent claims 1, 11, and 16, Biddle et al. discloses all of the limitations with an exception of “wherein each NER comprises a respective algorithm specialized for different annotations”, “the target entity being more specific than the at least one of the assigned entity types according to the knowledge base”, and “wherein the length of mention information is larger when an associated mention is part of a larger entity mention”. Here, Biddle et al. discloses a plurality of annotators which may be subject matter experts, where different annotators may be human users who may have different interpretations of annotations. (¶[0016] - ¶[0017]) Specifically, Biddle et al. does not expressly disclose that NER annotators comprise “respective algorithms” if these annotators are human subject matter experts. Moreover, Biddle et al. does not expressly disclose that a target entity type is “more specific than” assigned entity types, and does not expressly disclose that a length of mention information is larger “when an associated mention is part of a larger entity mention”. Biddle et al. provides an ontology that includes hierarchical levels to resolve a conflict between assigned entities, with hierarchical levels relating to a specificity of the entity, where relatively more specific entities that include fewer words or phrases may be on a ‘lower’ hierarchical level and relatively less specific entities that include more words or phrases may be on a ‘higher’ hierarchical level. (¶[0027] and ¶[0057]) A hierarchical level represents whether a target entity type is “more specific than” an assigned entity type, and more or fewer words or phrases of an entity correspond to “a length of mention”. However, Biddle et al.
does not clearly disclose “wherein the changing is based on: the target entity being more specific than the at least one of the assigned entity types according to the knowledge base” or that “the length of mention is larger when an associated mention is part of a larger entity mention”. Instead, Biddle et al. provides an embodiment of changing an annotation by going up a level in the hierarchy to change an entity type of apples, bananas, and pears to create a new more generic annotation for an entity of ‘fruit’. This entity type of ‘fruit’ is actually less specific than the assigned entity types. Still, patent case law has held that mere reversal of parts is evidence of obviousness. See MPEP §2144.04 VI.A. and In re Gazda, 219 F.2d 449, 104 USPQ 400 (CCPA 1955). Given a hierarchy of entity types to change an assigned entity type to a target entity type in Biddle et al., there is only a finite set of alternative ways to change an entity type to a target entity type, i.e., to change an entity type so that it is either more specific or less specific.
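For purposes of illustration only, the hierarchical resolution rationale discussed above may be sketched as follows; the toy ontology, the type names, and the `depth` convention are hypothetical and do not appear in Biddle et al. or Farre Guiu et al.:

```python
# Illustrative sketch only: a hypothetical type hierarchy where a deeper
# node is a more specific entity type, as in the apples/bananas/pears
# versus 'fruit' example discussed above.
PARENT = {
    "apple": "fruit",
    "banana": "fruit",
    "pear": "fruit",
    "fruit": "food",
    "food": None,
}

def depth(entity_type):
    """Depth of a type in the hierarchy; deeper means more specific."""
    d = 0
    while PARENT.get(entity_type) is not None:
        entity_type = PARENT[entity_type]
        d += 1
    return d

def resolve(conflicting_types, prefer_specific=True):
    """Pick a target entity type from conflicting assigned types.

    As noted above, there is only a finite set of alternatives:
    favor the more specific type or the less specific (generic) type.
    """
    if prefer_specific:
        return max(conflicting_types, key=depth)
    return min(conflicting_types, key=depth)

print(resolve(["apple", "fruit"]))         # favors the more specific type
print(resolve(["apple", "fruit"], False))  # favors the generic type
```

The two branches of `resolve` correspond to the two alternatives identified above: Biddle et al.'s embodiment of moving up to a more generic level, and the claimed preference for a more specific target entity type.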
Concerning independent claims 1, 11, and 16, Farre Guiu et al. teaches receiving annotation data identifying content, and applying annotation tags to the content, to perform an evaluation of a tagging process of the annotation tags. (Abstract) Farre Guiu et al. teaches that tagging is traditionally performed manually by human taggers, but various automated systems for performing content tagging and quality assurance (QA) review have been developed, too. (¶[0002]) Specifically, Farre Guiu et al. teaches that tagging performance evaluation system 100 includes both a human tagger 120a and an automated content annotation system 120b. (¶[0014]: Figure 1) Automated content annotation system 120b may implement a machine learning model (“a respective algorithm”), e.g., a neural network (NN) trained to apply annotations to content. (¶[0030]: Figure 2) Here, Farre Guiu et al.’s automated content annotation system 120b includes “a respective algorithm” corresponding to each of a plurality of different human annotators of Biddle et al. Moreover, Farre Guiu et al. teaches an embodiment of a circumstance in which a high number or percentage of annotation tags 122 applied to content 116 by a human tagger 120a or automated content annotation system 120b are corrected, and, where the corrected tags are overly generic, reports 428A to human tagger 120a or automated content annotation system 120b may read: ‘Your QA reviewer suggests that you use the more specific tags ‘Cattleman’s Ranch Steakhouse’ or ‘Huang family house’ instead of the generic tag ‘house’ when possible.’ (¶[0039]: Figure 4A) Farre Guiu et al., then, teaches that an annotation should be changed so that “the target entity type [is] more specific than at least one of the assigned entity types”. Additionally, Farre Guiu et al. teaches that the “length of mention information is larger when an associated mention is part of a larger entity mention” for ‘Cattleman’s Ranch Steakhouse’ and ‘house’.
That is, a ‘target entity type’ of ‘Cattleman’s Ranch Steakhouse’ has a larger length of mention information as compared to ‘house’ because ‘house’ “is part of a larger entity mention” of ‘Cattleman’s Ranch Steakhouse’, which is the preferred ‘target entity type’. An objective is to improve performance of tagging and quality assurance review processes performed as part of content annotation. (¶[0002]) It would have been obvious to one having ordinary skill in the art to perform the annotations of the plurality of annotators with different capabilities of Biddle et al. using automated annotators comprising specialized algorithms, so that annotations are performed with more specific annotation tags as taught by Farre Guiu et al., for a purpose of improving performance of annotation tagging and review in content annotation.
Concerning claims 2, 12, and 17, Farre Guiu et al. teaches a human quality assurance (QA) reviewer 124a (“a user”). (¶[0014]: Figure 1) Tagging performance evaluation systems and methods enable annotation administrators to appraise a taxonomy (“the knowledge base”) of tags used for content annotation, and based on this appraisal, annotation administrators may identify changes to the taxonomy for reducing errors due to tag confusion (“to automatically update the information in the knowledge base”). (¶[0022]) Corrections to annotation tags 122 may be made by one or more QA entities in the form of human QA reviewer 124a or automated QA system 124b. An evaluation of the tagging process is performed resulting in one or more corrections identified in annotation data 126. (¶[0032]: Figure 2: Step 242) One or more parameters for improving the tagging process may be identified. (¶[0034]: Figure 2: Step 243) Here, identifying by a human quality reviewer one or more corrections and parameters for improving an annotation is “user feedback information relating to the set of final entities”.
Concerning claim 3, Farre Guiu et al. teaches an evaluation of a tagging process resulting in an assessment of a correction process resulting in one or more corrections identified by annotation data 126. Evaluation of a tagging process may include a comparison of annotation tags 122 with the corrections to those tags identified by annotation data 126. (¶[0032]: Figure 2: Step 242) Specific tags of ‘reading’ and ‘studying’ may be missing. (¶[0039]: Figure 4A) Here, an evaluation of an annotation tagging process to identify corrections and missing tags is “information identifying missed and incorrectly classified entities.”
Concerning claims 4 and 13, Farre Guiu et al. teaches “an Entity Consolidator” for content being annotated that includes episodes of a TV series set in a home having a combined living room and kitchen space. Tag confusion may be revealed where a predetermined taxonomy may be simplified to include fewer tags. (¶[0036]: Figure 3) Farre Guiu et al., then, ‘consolidates’ a taxonomy by combining tags for ‘living room’ and ‘kitchen’ (“an Entity Consolidator”). Biddle et al. discloses “an Annotation Ranker” because annotation management system 400 may be configured to generate an accuracy score that indicates a level of annotation accuracy of an annotator. (¶[0032]: Figure 4) That is, a score provides a ‘ranking’ of annotations from different annotators.
Concerning claim 6, Biddle et al. discloses generating a new entity at the same or a less specific hierarchical level as the entities or relation in conflict (Abstract); Annotation Agreement Manager may determine to replace the entities with new entities on a more generic higher hierarchical level (¶[0021]); hierarchical levels may relate to a specificity of the entity, where relatively more specific entities that correspond to less words or phrases may be on a ‘lower’ hierarchical level and relatively less specific entities that correspond to more words or phrases may be on a ‘higher’ hierarchical level (“the target entity type comprises leveraging a type specificity associated with a respective mention to form the set of first resolved entities”) (¶[0027]: Figure 4); annotation management system 400 may generate a new entity at the same or less specific, e.g., higher, hierarchical level as the entities or relation in conflict; generating a new entity or relation for the conflict in question at the same or less specific hierarchical level as the entities or relation in conflict may include annotating text as a new entity or relation (¶[0048]: Figure 3: Step 346).
Concerning claims 9, 15, and 18, Biddle et al. discloses generating a new entity at the same or a less specific hierarchical level as the entities or relation in conflict (Abstract); Annotation Agreement Manager may determine to replace the entities with new entities on a more generic higher hierarchical level (¶[0021]); hierarchical levels may relate to a specificity of the entity, where relatively more specific entities that correspond to less words or phrases may be on a ‘lower’ hierarchical level and relatively less specific entities that correspond to more words or phrases may be on a ‘higher’ hierarchical level (¶[0027]: Figure 4); annotation management system 400 may generate a new entity at the same or less specific, e.g., higher, hierarchical level as the entities or relation in conflict; generating a new entity or relation for the conflict in question at the same or less specific hierarchical level as the entities or relation in conflict may include annotating text as a new entity or relation (¶[0048]: Figure 3: Step 346); here, a hierarchy of more specific and less specific entities is “a semantic hierarchy representing semantic relationship among multiple entity types and domain knowledge used to automatically identify characteristics of multiple entity types”; that is, “a semantic hierarchy” identifies that apples, bananas, and pears are types of fruit by reference to an ontology (“domain knowledge”); ‘semantic’ is defined as relating to a meaning in language.
Concerning claims 10 and 20, Biddle et al. discloses that Annotation Agreement Manager (AAM) may re-define and/or recategorize these entities to be more aligned with how these conflicts have been resolved, or may determine to simply remove these entities and/or relations, e.g., replacing the entities with new entities on a more generic higher hierarchical level (“wherein resolving the set of first resolved entities to create a set of final entities comprises removing of conflicting entity types”) (¶[0021]); resolving the identified conflict may then be based on the accuracy score associated with the entities or relations annotated by the annotator (¶[0032]: Figure 4); annotation management system 400 may generate an accuracy score that indicates a level of annotation accuracy for a respective annotator based on the identified pattern of conflicts (¶[0061]: Figure 4); annotation management system 400 may be configured to resolve identified conflicts based on one or more accuracy scores associated with the entities or relations annotated by one or more annotators of the conflict (¶[0064]: Figure 4). Implicitly, an entity annotation is resolved by selecting an entity annotation with a higher score (“by favoring higher scoring entity types”).
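For purposes of illustration only, the resolution “by favoring higher scoring entity types” discussed above for claims 10 and 20 may be sketched as follows; the annotator identifiers, entity types, and score values are hypothetical and do not appear in Biddle et al.:

```python
# Illustrative sketch only: resolving conflicting annotations by
# selecting the entity type proposed by the annotator with the
# higher accuracy score, per the accuracy-score discussion above.
def resolve_by_score(conflicting_annotations, accuracy_scores):
    """conflicting_annotations: list of (annotator_id, entity_type) pairs.
    Returns the entity type from the highest-scoring annotator."""
    best = max(conflicting_annotations,
               key=lambda pair: accuracy_scores[pair[0]])
    return best[1]

# Hypothetical data for the sketch.
scores = {"annotator_1": 0.92, "annotator_2": 0.67}
conflict = [("annotator_1", "restaurant"), ("annotator_2", "house")]
print(resolve_by_score(conflict, scores))  # higher-scoring annotator's type
```

This corresponds to the implicit teaching noted above that an entity annotation is resolved by selecting the annotation with the higher score.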
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Biddle et al. (U.S. Patent Publication 2020/0175106) in view of Farre Guiu et al. (U.S. Patent Publication 2022/0245554) as applied to claim 1 above, and further in view of Carus et al. (U.S. Patent Publication 2016/0350283).
Biddle et al. discloses a plurality of annotators and determining if one of the annotators is consistently annotating a certain type of entities incorrectly. (¶[0020]) Annotation management system 400 may determine a score for a plurality of annotators where positive credit is applied each time an annotation is accepted as a correct annotation and a negative credit is applied each time an annotation is rejected to identify the relative strengths and weaknesses of the respective annotators with regard to a particular topic. (¶[0061]: Figure 4) Arguably, Biddle et al., then, discloses “wherein the knowledge base contains a semantic specification of the capabilities of each of the NER annotators, and a set of weights associated to each entity and NER annotation pair.” Here, “a set of weights” for an annotator is equivalent to these positive and negative credits. Still, Biddle et al. does not expressly disclose “a semantic specification”. However, Carus et al. teaches a similarity measurement utility that uses data from broad-coverage lexical knowledge bases to create implementations with varying degrees of customization and enabling changes that occur over time. (Abstract) Specifically, model creation configuration includes specifications used to create semantic models and semantic mapping specifications. An oversampling specification specifies how the weights for two or more base models are combined and weighting method specifications determine the weighting method that is used to generate a run-time corpus model. (¶[0319] - ¶[0320]) Carus et al., then, teaches a semantic specification of a knowledge base and weights to assign to base models of a knowledge base. An objective is to create implementations with varying degrees of customization and enabling changes that occur over time. It would have been obvious to one having ordinary skill in the art to provide a semantic specification and a set of weights of a knowledge base as taught by Carus et al.
as applied to entity annotators of Biddle et al. for a purpose of creating implementations with varying degrees of customization and enabling changes that occur over time.
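For purposes of illustration only, the positive/negative credit scoring of ¶[0061] of Biddle et al. discussed above may be sketched as follows; the credit values and the accept/reject representation are hypothetical:

```python
# Illustrative sketch only: an annotator accuracy score accumulated
# from positive credits (annotation accepted) and negative credits
# (annotation rejected), per the ¶[0061] discussion above.
# Credit and penalty magnitudes are hypothetical.
def score_annotator(decisions, credit=1.0, penalty=1.0):
    """decisions: iterable of booleans, True if the annotation was
    accepted as correct, False if it was rejected."""
    return sum(credit if accepted else -penalty for accepted in decisions)

# Three accepted annotations and one rejected annotation.
print(score_annotator([True, True, False, True]))
```

Such per-annotator credits are what is mapped above to “a set of weights” associated with each entity and NER annotation pair.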
Response to Arguments
Applicants’ arguments filed 31 March 2026 have been fully considered but they are not persuasive.
Applicants amend independent claims 1, 11, and 16 to set forth a new limitation of “wherein the length of mention information is larger when an associated mention is part of a larger entity mention”, and present arguments traversing the prior rejection of the independent claims as being obvious under 35 U.S.C. §103 over Biddle et al. (U.S. Patent Publication 2020/0175106) in view of Farre Guiu et al. (U.S. Patent Publication 2022/0245554). Specifically, Applicants argue that a length of mention information is not equivalent to a size or length of an entity type, i.e., a length of an annotated label, and that because a length of a ‘mention’ is not equivalent to a length of a label, the prior art does not teach changing an assigned entity type to a target entity type based on a length of mention information “wherein the length of mention information is larger when an associated mention is part of a larger entity mention”. Applicants’ argument appears to focus on a concept of a ‘mention’ as described as a specific named entity that may be assigned one or more entity types at ¶[0010] of the Specification. Additionally, Applicants cite ¶[0031] of the Specification as describing that a length of mention information corresponds to a length of a given mention, which they construe as a number of words or phrases in the mention.
Applicants’ amendment has not addressed the claim objection directed to “each NER”. Because Applicants appear to be trying to draw an equivalence between an “NER”, i.e., Named Entity Recognition, and a “mention”, and to draw a distinction between an “NER” and “an entity type”, a lack of antecedent basis for “each NER” appears to be a somewhat significant issue here. Applicants’ claim language sets forth antecedent basis for “Named Entity Recognition (NER) annotators” but there is no antecedent basis for an “NER” in “each NER”. A ‘mention’ might correspond to a ‘Named Entity’, but an “NER” or “Named Entity Recognition” would only have meaning as a descriptor of some process being performed.
Applicants’ arguments are not persuasive and the rejection is being maintained for the independent claims as obvious under 35 U.S.C. §103 over Biddle et al. (U.S. Patent Publication 2020/0175106) in view of Farre Guiu et al. (U.S. Patent Publication 2022/0245554). Generally, Biddle et al., at ¶[0027], states:
The annotation management system 400 may resolve this identified conflict by, e.g., identifying the correct annotation between the conflicting options, splitting the annotated text into two separate entities or relations, generating a new entity at the same or higher hierarchical level as the entities or relation in conflict, or changing an annotation of the annotated version of the document. Hierarchical levels may relate to a specificity of the entity, where relatively more specific entities that correspond to less words or phrases may be on a “lower” hierarchical level and relatively less specific entities that correspond to more words or phrases may be on a “higher” hierarchical level. (emphasis added)
Additionally, Biddle et al. discloses generating a new entity at the same or a less specific hierarchical level as the entities or relation in conflict. (Abstract) Biddle et al., at ¶[0048]: Figure 3, discloses:
. . . annotation management system 400 may generate a new entity at the same or less specific (e.g., higher) hierarchical level as the entities or relation in conflict (346). In some examples, generating a new entity or relation for the conflict in question at the same or less specific hierarchical level as the entities or relation in conflict may include annotating text as a new entity or relation.
Biddle et al., then, generates a corrected annotation, or “the target entity type”, based on two factors: (1) whether a specific entity is more or less specific in a hierarchy of an ontology, and (2) whether a specific entity has more or fewer words or phrases. Broadly, it is contended that whether a specific entity has more or fewer words or phrases corresponds to the claim limitation of “the length of mention information is larger when an associated mention is part of a larger entity mention”. That is, an entity mention is larger because there are more words in the entity mention, or the phrase represented by the entity mention has more words.
A preference of correcting an annotation so that a target entity type is more specific and has a length of mention that is larger as being part of a larger entity mention as compared to an assigned entity type is taught by Farre Guiu et al. Here, Farre Guiu et al., at ¶[0039], teaches an embodiment of correcting annotation tags to change an annotation of text from ‘house’ to ‘Cattleman's Ranch Steakhouse’:
As further shown in FIG. 4A, in circumstances in which a high number or percentage of annotation tags 122 applied to content 116 by human tagger 120a or automated content annotation system 120b are corrected during QA review, and where the corrected tags are overly generic, reports 428A to human tagger 120a or automated content annotation system 120b may read: “Your QA reviewer suggests that you use the more specific tags ‘Cattleman's Ranch Steakhouse’ or ‘Huang family house’ instead of the generic tag ‘house’ when possible.” (emphasis added)
Here, ‘Cattleman's Ranch Steakhouse’ is more specific as compared to ‘house’ and ‘Cattleman's Ranch Steakhouse’ has a “length of mention” that is “larger” as compared to ‘house’ because the former has more words than the latter. Additionally, ‘house’ “is part of a larger entity mention” of ‘Cattleman's Ranch Steakhouse’. That is, ‘Cattleman's Ranch Steakhouse’ has a component word that is ‘house’ so that ‘Cattleman's Ranch Steakhouse’ is a larger mention that ‘house’ is part of.
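For purposes of illustration only, the containment relationship argued above, where ‘house’ is part of the larger entity mention ‘Cattleman's Ranch Steakhouse’, may be sketched as follows; the function name and the case-insensitive substring test are assumptions made for this sketch:

```python
# Illustrative sketch only: preferring the mention with the larger
# "length of mention information" when one mention is part of a
# larger entity mention, per the 'house' versus 'Cattleman's Ranch
# Steakhouse' discussion above.
def prefer_larger_mention(mention_a, mention_b):
    """If one mention's text is contained in the other, return the
    larger, more specific mention; otherwise return None."""
    a, b = mention_a.lower(), mention_b.lower()
    if a in b:
        return mention_b
    if b in a:
        return mention_a
    return None

# 'house' is a component of 'Steakhouse', so the larger mention is preferred.
print(prefer_larger_mention("house", "Cattleman's Ranch Steakhouse"))
```

On this sketch, the more specific target entity type is the one whose mention text subsumes the shorter, more generic mention, which is the construction applied above to Farre Guiu et al.'s tag correction.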
Applicants’ argument that the length of mention information is not equivalent to a size or length of an entity type, i.e., a length of an annotation label, is not persuasive under a broadest reasonable interpretation. During patent examination, the pending claims must be “given their broadest reasonable interpretation consistent with the specification.” Phillips v. AWH Corp., 415 F.3d 1303, 1316, 75 USPQ2d 1321, 1329 (Fed. Cir. 2005). Because applicant has the opportunity to amend the claims during prosecution, giving a claim its broadest reasonable interpretation will reduce the possibility that the claim, once issued, will be interpreted more broadly than is justified. In re Yamamoto, 740 F.2d 1569, 1571 (Fed. Cir. 1984); In re Zletz, 893 F.2d 319, 321, 13 USPQ2d 1320, 1322 (Fed. Cir. 1989) (“During patent examination the pending claims must be interpreted as broadly as their terms reasonably allow.”); In re Prater, 415 F.2d 1393, 1404-05, 162 USPQ 541, 550-51 (CCPA 1969). See MPEP §2111.
Mainly, Biddle et al. discloses three ways to resolve an annotation conflict: (1) identify a correct annotation (Step 342), (2) split into separate entities or relations (Step 344), and (3) generate a new entity (Step 346). See Abstract and ¶[0048]: Figure 3. Conceptually, then, generating an annotation with a target entity type can include splitting up an entity into component words, and this is what is performed in the embodiment of Farre Guiu et al. that prefers the larger entity mention ‘Cattleman's Ranch Steakhouse’ over the more generic entity mention ‘house’. That is, an annotation could split ‘Cattleman's Ranch Steakhouse’ into component entity words of ‘Cattleman's’, ‘Ranch’, ‘Steak’, and ‘house’, or an entity could be labeled as the whole mention ‘Cattleman's Ranch Steakhouse’. Farre Guiu et al.’s annotation and tagging comprises a preference for more specific and larger mentions. One could construct a similar embodiment for an annotation of ‘Chicago Cubs’, which could be split into ‘Chicago’ (a city) and ‘Cubs’ (an animal) or annotated whole as ‘Chicago Cubs’ (a baseball team). Here, ‘Cattleman's Ranch Steakhouse’ might be a more meaningful annotation as the name of a restaurant than an attempt to understand its meaning under an entity type of ‘house’. Applicants’ argument, then, that a length of mention information is not equivalent to a length of an entity type or a length of a label is not completely persuasive because a corrected annotation under (2) split into separate entities or relations (Step 344) of Biddle et al. does determine a length of mention of an entity during the annotation. The conflict between annotations can lie in how the entities are split into individual words, so that resolving the annotation conflict by an entity type is a choice of whether to favor a longer or a shorter number of words, i.e., “length of mention information”. Biddle et al. discloses the basic idea of resolving a conflict between annotations by “splitting into separate entities or relations”, and Farre Guiu et al. teaches an embodiment that favors the more specific and lengthier annotation ‘Cattleman's Ranch Steakhouse’ over the more generic and shorter ‘house’.
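Purely for illustration (neither cited reference discloses source code, and all names below are invented), the longest-mention preference discussed above can be sketched as follows:

```python
# Illustrative sketch only: resolving an annotation conflict between
# overlapping candidate entity mentions by preferring the candidate with
# the greater "length of mention" (here, word count).

def resolve_conflict(candidate_mentions):
    """Given overlapping candidate mentions (each a list of words),
    favor the one with the largest number of component words."""
    return max(candidate_mentions, key=len)

# Conflicting annotations of the same text span: single-word mentions
# versus the whole three-word mention.
split_mentions = [["Cattleman's"], ["Ranch"], ["Steakhouse"]]
whole_mention = ["Cattleman's", "Ranch", "Steakhouse"]

preferred = resolve_conflict(split_mentions + [whole_mention])
# The three-word mention wins over any one-word mention.
```

Under this sketch, the generic one-word mention ‘house’ would likewise lose to the three-word mention ‘Cattleman's Ranch Steakhouse’, consistent with the embodiment of Farre Guiu et al. described above.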
Generally, a broadest reasonable interpretation must be consistent with the Specification. However, Applicants’ Specification does not clearly describe embodiments that distinguish this claim language over the prior art. The Specification, ¶[0010], states that a mention of a specific named entity could be ‘Harris’, and ‘Harris’ could be assigned an entity type of PERSON or an entity type of COMPANY. Additionally, Applicants’ Specification, ¶[0031], describes an embodiment of using length of mention information in which COUNTRY is part of a larger entity mention of ADDRESS, so that COUNTRY is converted into the larger entity mention of ADDRESS. Given this limited description, it is not completely clear why ADDRESS would necessarily be a larger entity mention than COUNTRY, or why ADDRESS would be a more specific entity type than COUNTRY in the ordinary meaning of those terms. Still, one skilled in the art might understand that ADDRESS can include a plurality of fields such as STREET NAME, CITY, STATE, and possibly COUNTRY, so that an annotation might be preferred for an entire block of text of ADDRESS instead of annotations of the individual fields including COUNTRY. Nevertheless, Farre Guiu et al.’s embodiment of favoring the more specific and lengthier annotation ‘Cattleman's Ranch Steakhouse’ over the more generic and shorter ‘house’ appears to be equivalent to Applicants’ embodiment of favoring an annotation of the more specific and larger-length entity type ADDRESS over the more generic entity type COUNTRY.
Applicants’ arguments are not persuasive. There are no new grounds of rejection. Accordingly, this rejection is properly FINAL.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicants’ disclosure.
Brancovici et al., Martinez Galindo et al., Johnson et al., Blume et al., Kanani et al., Asthana et al., and Bay et al. disclose related prior art.
THIS ACTION IS MADE FINAL. Applicants are reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARTIN LERNER whose telephone number is (571) 272-7608. The examiner can normally be reached Monday-Thursday 8:30 AM-6:00 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil can be reached on (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARTIN LERNER/Primary Examiner
Art Unit 2658 April 10, 2026