DETAILED ACTION
This communication is in response to the Amendments and Arguments filed on 10/3/2025.
Claims 1-3, 5-10, 12-17, and 19-20 are pending and have been examined.
All previous objections / rejections not mentioned in this Office Action have been withdrawn by the examiner.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments / Amendments
Regarding the Applicant’s arguments for the rejections under 35 U.S.C. § 101, Applicant has amended independent claims 1, 8, and 15. Applicant argues that the amended language recites steps that cannot be performed in the human mind. Applicant further argues that the added limitations integrate the allegedly abstract idea into a practical application. Examiner respectfully disagrees. During patent examination, pending claims must be “given their broadest reasonable interpretation consistent with the specification.” MPEP 2111. First, the contextual text machine learning encoder and the contextual entity machine learning encoder are interpreted as sets of rules or instructions for obtaining a text representation and an entity representation from text. The trained text domain-specific scorer machine learning model is likewise interpreted as a set of instructions for obtaining relevance scores. A human mind can formulate a text representation and an entity representation from text by following instructions, and can likewise determine a score between the text representation and the entity representation by following a set of instructions. Second, the claims as recited do not integrate the abstract idea into a practical application. The recitations are high level and describe the use of generic computing components to carry out the abstract idea. The claims do not recite a specific technological improvement to the shared encoder or the domain-specific scorer, nor do they describe how the claimed operations improve the functioning of the computer or other technology in a concrete, technical way. The claims describe combining a text and an entity to obtain a relevance score, which is a high-level concept that can be performed in the mind. Therefore, the claims as currently recited do not overcome the rejection under 35 U.S.C. § 101 as directed to an abstract idea.
Regarding the Applicant’s arguments for the rejections under 35 U.S.C. § 102, Applicant has amended independent claims 1, 8, and 15. Hence, the Applicant’s arguments are moot in view of the new grounds of rejection set forth below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claims 1, 8, and 15, the limitations of “receiving a text”, “determining one or more named entities based on the text”, “for each named entity of the one or more named entities:”, “generating, using a trained shared encoder, a contextual embedding by:”, “generating a contextual text embedding via a contextual text machine learning encoder of the trained shared encoder in response to input to the contextual text encoder of a canonical named entity and text context pair”, “generating a contextual entity embedding via a contextual entity machine learning encoder of the trained shared encoder in response to input of the canonical named entity and text context pair to the contextual entity machine learning encoder”, “pairwise combining the contextual text embedding and the contextual entity embedding to produce the contextual embedding”, “determining relevance scores using trained text domain-specific scorer machine learning models and the contextual embedding”, and “providing the relevance scores to a named entity recommendation system”, as drafted, are processes that, under the broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. More specifically, the claims recite the mental process of a human reading text, thinking of words or phrases of the text and associated entities that are part of a domain taxonomy, writing the phrase and associated entities on paper using a pen or pencil, thinking of a relevance score between the written phrase and the associated entities, and writing the relevance score on paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the --Mental Processes-- grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application because the recitation of a system comprising at least one computer, processor, and memory in claim 8 and of one or more non-transitory computer-readable media in claim 15 reads on generalized computer components, based upon the claim interpretation wherein the structure is interpreted using P0095-P0117 of the specification. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using generalized computer components to read text, think of words or phrases of the text and associated entities that are part of a domain taxonomy, write the phrase and associated entities on paper using a pen or pencil, think of a relevance score between the written phrase and the associated entities, and write the relevance score on paper amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
With respect to claims 2, 9, and 16, the claims recite “matching a sequence of characters of the text to a sequence of characters associated with the named entity in an index [data structure]”, which reads on a human mentally matching words or characters in the text to named entities in a taxonomy tree for a domain. No additional limitations are present.
With respect to claims 3, 10, and 17, the claims recite “generating a first embedding representing the named entity”, “generating a second embedding representing a portion of the text”, and “comparing the first embedding to second embedding according to a similarity function”, which reads on a human comparing an entity written on paper with a portion of text written on paper and mentally judging whether the words are similar. No additional limitations are present.
With respect to claims 5, 12, and 19, the claims recite “determining a text domain of a plurality of pre-determined text domains to which the text belongs”, “wherein each text domain of the plurality of pre-determined text domains corresponds to a respective text domain-specific machine learning model”, and “selecting the text domain-specific machine learning model to use to determine the respective relevance score for each named entity of the one or more named entities based on the determined text domain to which the text belongs”, which reads on a human determining a domain for the text that was read and mentally determining a relevance score between the text and a named entity using an understanding of the particular domain to which the text belongs. No additional limitations are present.
With respect to claims 6, 13, and 20, the claims recite “wherein determining the one or more named entities based on the text comprises determining a parent named entity, a sibling named entity, or a child named entity in a named entity taxonomy of a named entity of the one or more named entities”, which reads on a human mentally determining named entities for text phrases according to the taxonomy hierarchy to which the entity belongs. No additional limitations are present.
With respect to claims 7 and 14, the claims recite “wherein the named entity recommendation system to which the respective relevance score determined for the named entity is provided determines to recommend the named entity based on the respective relevance score”, which reads on a human writing down the named entity for a text phrase that has the highest relevance score thought of in the mind. No additional limitations are present.
These dependent claims do not integrate the judicial exception into a practical application and fail to include additional elements that are sufficient to amount to significantly more than the judicial exception.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3, 7, 8, 10, 14, 15, and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tomberg et al. (U.S. PG Pub No. 20200311115), hereinafter Tomberg.
Regarding claims 1, 8, and 15, Tomberg teaches:
(Claim 1) A method comprising: (P0005, Method for mapping of text phrases in a corpus to a taxonomy.)
(Claim 8) A system, comprising: at least one computer comprising at least one processor and at least one memory, the at least one computer configured to: (P0013, System for mapping of text phrases in a corpus to a taxonomy, the system comprising one or more processors and memory, the memory storing the corpus and taxonomy, the one or more processors in communication with the memory and configured to execute.)
(Claim 15) One or more non-transitory computer-readable media comprising computer-executable instructions that, when executed, cause at least one processor to perform actions comprising: (P0013, System for mapping of text phrases in a corpus to a taxonomy, the system comprising one or more processors and memory, the memory storing the corpus and taxonomy, the one or more processors in communication with the memory and configured to execute.)
receiving a text; (P0053, The input module receives the corpus. … Mapping of text phrases in a corpus to a taxonomy.)
determining one or more named entities based on the text; and (P0005, Mapping the set of word embeddings to the set of node embeddings using a mapping function.; P0007, Mapping the text phrases in the corpus to a set of word embeddings comprises performing at least one of GloVe and fastText.; P0008, The taxonomy comprises a graph with concepts at each vertex and relationships between respective concepts at the edges connecting respective vertices.)
generating, using a trained shared encoder, a contextual embedding by: (P0007, Mapping the text phrases in the corpus to a set of word embeddings comprises performing at least one of GloVe and fastText.; P0013, A corpus module to map the text phrases in the corpus to a set of word embeddings in a word embedding space, where each sequence of word embeddings corresponds to individual words in one of the text phrases.)
generating a contextual text embedding via a contextual text machine learning encoder of the trained shared encoder in response to input to the contextual text encoder of a canonical named entity and text context pair; (P0007, Mapping the text phrases in the corpus to a set of word embeddings comprises performing at least one of GloVe and fastText.; P0013, A corpus module to map the text phrases in the corpus to a set of word embeddings in a word embedding space, where each sequence of word embeddings corresponds to individual words in one of the text phrases.; P0010, Concatenating the word embeddings into a single multi-dimensional vector; and determining a linear mapping comprising a matrix multiplication of the points in the node embedding space and the single multi-dimensional vector, wherein the linear mapping is the mapping function.)
generating a contextual entity embedding via a contextual entity machine learning encoder of the trained shared encoder in response to input of the canonical named entity and text context pair to the contextual entity machine learning encoder; and (P0060, The taxonomy module can use any suitable taxonomy to which the mapping module maps phrases.; P0061, To construct taxonomy embeddings, taxonomy module can use any suitable embedding approach. In an example, the taxonomy module can use the node2vec approach.)
pairwise combining the contextual text embedding and the contextual entity embedding to produce the contextual embedding; (P0062, [Equation: matrix multiplication of p and q, representing the word embedding matrix and the taxonomy embedding, to find the cosine similarity.])
determining relevance scores using trained text domain-specific scorer machine learning models and the contextual embedding; and (P0019, The mapping module further generates the mapping function, comprising: training a convolutional neural network using phrase-concept pairs previously labelled for at least a portion of the taxonomy, the convolutional neural network taking as input the set of word embeddings and the set of node embeddings, the convolutional neural network comprising applying convolutional filters to the input vectors to generate feature maps, feeding the feature maps into a pooling layer, and projecting the output of the pooling layer to obtain an output of a reduced dimension, wherein the trained convolutional neural network is the mapping function.; P0062, The mapping module can map between phrases and concepts in the target taxonomy by associating points in the node embedding vector space to sequences of word embeddings corresponding to individual words in a phrase. … The present inventors tested two measures of closeness in the node embedding vector space: Euclidean distance and cosine similarity.; P0063, Three different architectures are provided as examples herein, although others may be used: a linear mapping, a convolutional neural network (CNN), and a bidirectional long short term memory network (Bi-LSTM).)
providing the relevance scores to a named entity recommendation system. (P0013, Mapping module to map the set of word embeddings to the set of node embeddings using a mapping function, the mapping function outputting points in the node embedding space associated with sequences in the word embeddings; and an output module to output the mapping function.)
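For clarity of the mapping above, the following is a minimal illustrative sketch (examiner-provided illustration only, not drawn from Tomberg or the claims; all names and values are hypothetical) of the concept characterized at P0062: a contextual text embedding and a contextual entity embedding are combined pairwise, and a relevance score is computed as a cosine similarity between the two representations.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for a (canonical named entity, text context) pair.
text_embedding = np.array([0.2, 0.7, 0.1])    # stands in for the contextual text embedding
entity_embedding = np.array([0.3, 0.6, 0.2])  # stands in for the contextual entity embedding

# Pairwise combination (here, concatenation) producing a contextual embedding.
contextual_embedding = np.concatenate([text_embedding, entity_embedding])

# Relevance score between the text and entity representations.
relevance_score = cosine_similarity(text_embedding, entity_embedding)
print(contextual_embedding, relevance_score)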
Regarding claims 3, 10, and 17, Tomberg teaches claims 1, 8, and 15.
Tomberg further teaches:
generating a first embedding representing the named entity; (P0061, To construct taxonomy embeddings, taxonomy module can use any suitable embedding approach.)
generating a second embedding representing a portion of the text; and (P0053, The corpus module maps the corpus to word embeddings; P0065, Word embeddings may be concatenated.)
comparing the first embedding to second embedding according to a similarity function. (P0062, Mapping function can map the sequence of input vectors to target vectors (p), which in this example is in a 128 dimensional node embedding vector space. To find the corresponding node related to those words, the mapping module determines the closest node to the target vectors (p). In an example experiment of the biomedical example, the present inventors tested two measures of closeness in the node embedding vector space 128: Euclidean distance and cosine similarity.)
Regarding claims 7 and 14, Tomberg teaches claims 1 and 8.
Tomberg further teaches:
wherein the named entity recommendation system to which the respective relevance score determined for the named entity is provided determines to recommend the named entity based on the respective relevance score. (P0062, To find the corresponding node related to those words, the mapping module determines the closest node to the target vectors (p). In an example experiment of the biomedical example, the present inventors tested two measures of closeness in the node embedding vector space 128: Euclidean distance and cosine similarity.)
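As an illustration of the recommendation step characterized above (a hypothetical sketch, not drawn from Tomberg; the entities and scores are invented for illustration), the recommendation amounts to selecting the named entity with the highest relevance score:

# Hypothetical relevance scores provided to a recommendation system.
relevance_scores = {"entity_a": 0.82, "entity_b": 0.41, "entity_c": 0.67}

# Recommend the named entity with the highest relevance score.
recommended_entity = max(relevance_scores, key=relevance_scores.get)
print(recommended_entity)  # prints "entity_a"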
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Tomberg in view of Aoki (U.S. PG Pub No. 20090094177).
Regarding claims 5, 12, and 19, Tomberg teaches claims 1, 8, and 15.
Tomberg further teaches:
wherein each text domain of the plurality of pre-determined text domains corresponds to a respective text domain-specific machine learning model; and (P0019, The mapping module further generates the mapping function, comprising: training a convolutional neural network using phrase-concept pairs previously labelled for at least a portion of the taxonomy, the convolutional neural network taking as input the set of word embeddings and the set of node embeddings, the convolutional neural network comprising applying convolutional filters to the input vectors to generate feature maps, feeding the feature maps into a pooling layer, and projecting the output of the pooling layer to obtain an output of a reduced dimension, wherein the trained convolutional neural network is the mapping function.)
selecting the text domain-specific machine learning model to use to determine the respective relevance score for each named entity of the one or more named entities based on the determined text domain to which the text belongs. (P0011, The method further comprising generating the mapping function, comprising training a convolutional neural network using phrase-concept pairs previously labelled for at least a portion of the taxonomy, the convolutional neural network taking as input the set of word embeddings and the set of node embeddings, the convolutional neural network comprising applying convolutional filters to the input vectors to generate feature maps, feeding the feature maps into a pooling layer, and projecting the output of the pooling layer to obtain an output of a reduced dimension, wherein the trained convolutional neural network is the mapping function.; P0013, Receive the corpus and the taxonomy; a corpus module to map the text phrases in the corpus to a set of word embeddings in a word embedding space, where each sequence of word embeddings corresponds to individual words in one of the text phrases; a taxonomy module to vectorize the taxonomy to a set of node embeddings in a node embedding vector space; and a mapping module to map the set of word embeddings to the set of node embeddings using a mapping function, the mapping function outputting points in the node embedding space associated with sequences in the word embeddings; and an output module to output the mapping function.; [The selection of a text domain-specific machine learning model is evident in the domain taxonomy being known or input, as described in Fig. 3. The domain-specific, taxonomy-trained mapping function utilizes a convolutional neural network trained for the specific text domain to produce embedding distances or relevance scores.])
Tomberg does not specifically teach:
determining a text domain of a plurality of pre-determined text domains to which the text belongs;
Aoki, however, teaches:
determining a text domain of a plurality of pre-determined text domains to which the text belongs; (P0014, The method comprises calculating a category score for categories to which a digital document may be classified. The category score is based on the relevance of the text in document.)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to determine a text domain to which the text belongs. It would have been obvious to combine the references because classification of text is a known technique in the art that yields the predictable result of classifying text into categories according to the relevance of the text in the document to each category. (Aoki P0014)
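As an illustration of the combination as characterized above (a hypothetical sketch; the domain classifier, domain names, and scorer functions are invented for illustration and are not drawn from Tomberg or Aoki), determining the text domain and selecting a corresponding domain-specific scorer may be understood as follows:

from typing import Callable, Dict

def classify_domain(text: str) -> str:
    # Stand-in for Aoki's category scoring: returns one pre-determined domain.
    return "biomedical" if "protein" in text.lower() else "general"

# Each pre-determined text domain corresponds to a respective domain-specific scorer.
domain_scorers: Dict[str, Callable[[str, str], float]] = {
    "biomedical": lambda text, entity: 0.9,  # stand-in for a domain-specific model
    "general": lambda text, entity: 0.5,
}

text = "The protein binds to the receptor."
scorer = domain_scorers[classify_domain(text)]   # select model by determined domain
print(scorer(text, "receptor"))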
Claims 2, 6, 9, 13, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Tomberg in view of Delgo et al. (U.S. PG Pub No. 20180232443), hereinafter Delgo.
Regarding claims 2, 9, and 16, Tomberg teaches claims 1, 8, and 15.
Tomberg does not specifically teach:
matching a sequence of characters of the text to a sequence of characters associated with the named entity in an index [data structure].
Delgo, however, teaches:
matching a sequence of characters of the text to a sequence of characters associated with the named entity in an index [data structure]. (P0132, In the first step, named entity recognition and linking is performed, annotating the text with where certain named entities occur, and their respective type. The named entities may be found in the knowledge graph and contextual ontology by using a string matching index.)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to match a sequence of characters of the text to a sequence of characters associated with a named entity in an index. It would have been obvious to combine the references because matching characters in text allows named entity recognition and named entity linking to be performed on the user-provided text to associate parts of the text with known named entities described in the knowledge graph. String matching is a known technique that yields the predictable result of associating text with entities in a knowledge graph. (Delgo P0015)
Regarding claims 6, 13, and 20, Tomberg teaches claims 1, 8, and 15.
Tomberg does not specifically teach:
wherein determining the one or more named entities based on the text comprises determining a parent named entity, a sibling named entity, or a child named entity in a named entity taxonomy of a named entity of the one or more named entities.
Delgo, however, teaches:
wherein determining the one or more named entities based on the text comprises determining a parent named entity, a sibling named entity, or a child named entity in a named entity taxonomy of a named entity of the one or more named entities. (P0132, The named entities may be found in the knowledge graph and contextual ontology by using a string matching index.; P0046, A knowledge graph may rely on a global ontology or a local ontology. A local ontology may be a contextual ontology constructed of a smaller set of entities and properties derived from the entities and properties defined in the global ontology. For example, a contextual ontology which defines a ‘ServiceProvider’ entity type as a subclass of a ‘Corporation’ type defined in the global ontology. A subclass relation endows the child object with all properties and relations associated with the parent object. In this example, the ‘ServiceProvider’ entity has ‘officesLocation’ property if one was defined for the ‘Corporation’ entity in the global ontology. A contextual ontology is limited to entities and relations of interest in the specific context of a particular question (or set of questions) the Project Brief.)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to determine a parent, sibling, or child named entity in a named entity taxonomy when determining a named entity for text. It would have been obvious to combine the references because inheritance relationships may exist in the various ontology definitions and carry semantic meaning for practical purposes. (Delgo P0045)
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL WONSUK CHUNG whose telephone number is (571)272-1345. The examiner can normally be reached Monday - Friday, 7am-4pm (PT).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PIERRE-LOUIS DESIR can be reached at (571)272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL W CHUNG/Examiner, Art Unit 2659
/PIERRE LOUIS DESIR/Supervisory Patent Examiner, Art Unit 2659