Prosecution Insights
Last updated: April 19, 2026
Application No. 18/637,046

SYSTEM AND METHOD FOR WEIGHTED IDENTITY RETRIEVAL

Status: Non-Final OA (§103)
Filed: Apr 16, 2024
Examiner: TOUGHIRY, ARYAN D
Art Unit: 2165
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 3 (Non-Final)
Grant Probability: 68% (Favorable)
Projected OA Rounds: 3-4
Estimated Time to Grant: 3y 1m
Grant Probability with Interview: 88%
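As a quick sanity check on the headline numbers (an illustrative calculation, not part of the tool's model), the 88% with-interview figure is consistent with adding the examiner's roughly 20-point interview lift to the 68% base grant probability:

```python
# Illustrative check: with-interview probability minus base probability
# recovers the examiner's reported interview lift of roughly +20%.
base_probability = 0.68   # predicted grant probability
with_interview = 0.88     # predicted grant probability after an examiner interview
uplift = with_interview - base_probability
print(f"implied interview uplift: {uplift:+.0%}")  # +20%
```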

Examiner Intelligence

Career Allowance Rate: 68% (above average; 128 granted / 189 resolved; +12.7% vs Tech Center average)
Interview Lift: +19.9% (strong; allowance rate with vs. without interview, among resolved cases with an interview)
Typical Timeline: 3y 1m average prosecution; 17 applications currently pending
Career History: 206 total applications across all art units
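The career figures above are internally consistent; a quick illustrative check (the rounding and baseline inference here are assumptions, not from the tool):

```python
# The displayed 68% is the rounded grant/resolved ratio,
# and the +12.7% delta implies a Tech Center baseline near 55%.
granted, resolved = 128, 189
career_allow_rate = granted / resolved
print(f"career allowance rate: {career_allow_rate:.1%}")  # 67.7%, displayed rounded as 68%
implied_tc_baseline = career_allow_rate - 0.127
```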

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 64.4% (+24.4% vs TC avg)
§102: 14.9% (-25.1% vs TC avg)
§112: 7.0% (-33.0% vs TC avg)

Deltas are relative to a Tech Center average estimate. Based on career data from 189 resolved cases.
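The per-statute figures are self-consistent: subtracting each delta from the examiner's rate yields the same Tech Center baseline of about 40% for every statute. An illustrative check, not part of the source data:

```python
# Each statute's rate minus its reported delta implies the same
# ~40.0% Tech Center baseline estimate.
examiner_rate = {"101": 7.0, "103": 64.4, "102": 14.9, "112": 7.0}   # percent
delta_vs_tc = {"101": -33.0, "103": 24.4, "102": -25.1, "112": -33.0}
implied_tc_avg = {s: examiner_rate[s] - delta_vs_tc[s] for s in examiner_rate}
assert all(abs(v - 40.0) < 1e-6 for v in implied_tc_avg.values())
```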

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/7/2026 has been entered.

Response to Arguments

Applicant's arguments filed 1/7/2026 have been fully considered.

35 USC § 102 & 35 USC § 103, regarding Applicant's argument (pages 8-9): Applicant's arguments, filed 1/7/2026, with respect to the rejections under 35 USC § 102/103 have been fully considered; upon further consideration, a new ground of rejection is made in view of US 20140365216 A1; GRUBER; Thomas R. et al. (hereinafter Gruber).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over US 20220292262 A1; Japa; Sai Sharath et al. (hereinafter Japa) in view of US 20120296891 A1; Rangan; Venkat (hereinafter Rangan), US 20230186052 A1; WONG; Jimmy Chi Kin et al. (hereinafter Wong), US 20090030894 A1; Mamou; Jonathan Joseph et al. (hereinafter Mamou), and US 20140365216 A1; GRUBER; Thomas R. et al. (hereinafter Gruber).

Regarding claim 1, Japa teaches A computer-implemented method, executed on a computing device, comprising:

processing a query for obtaining data from an unstructured database; (Japa [0034] In various embodiments, the content sources 175 include broadcast television and radio sources, video on demand platforms and streaming video and audio services platforms, one or more content data networks, data servers, web servers and other content servers, and/or other sources of media. [0037] The example system 100 further includes a question-answer system adapted to determine answers to natural language questions from information maintained by the knowledge base management system 180.
For example, the system 100 may include a question answer (QA) server 183 hosting a back-end question-answer service. The QA server 183 receives a query via the communication network 125, processes the query and generates an answer according to information maintained by the knowledge base management system. In some embodiments, the QA server 183 is collocated. [0040] The QA server 183 may process the query to determine an answer and forward the answer to the user via one or more of a voice response via the telephone device 134 and/or via some other mode, such as an email or text message. The answer may be converted from text to voice at the QA server 183, within the communications network 125 for delivery to the user via the voice access 130. [FIG. 1 & 3] shows a visual of the overall query system obtaining data from an unstructured database.)

generating a parsed representation of a query field of the query; (Japa [0004] knowledge-based QA systems may accept natural language as a query, offering a more user-friendly solution. There are two primary approaches for the task of QA: (i) semantic parsing based systems (SP-based), and (ii) information retrieval-based systems (IR-based). The SP-based approaches address the QA problem by constructing a semantic parser that converts a natural language question into conditionally structured expressions, like the logical forms, and then run the query on the knowledge base to obtain the answer. The SP-based approaches generally convert candidate entity-predicate pairs into a query statement and query the knowledge base to obtain an answer. Example SP-based systems may include three modules: (i) an entity linking module, adapted to recognize all entity mentions in a question and link each mention to an entity in the knowledge base; (ii) a predicate mapping module adapted to find candidate predicates for the question within the knowledge base; and (iii) an answer selection module. [0093] A pre-trained BERT embeddings base uncased version was used for knowledge base-QA training. During tokenization, BERT code uses a word-piece algorithm to split words into sub-words and all less frequent words will be split into two or more sub-words. The vocabulary size of BERT was 30522. A delexicalization strategy was adopted. For each question, the candidate entity mentions belonging to date, ordinal, or number are replaced with their type. The same is applied to answer context from knowledge base text if the overlap belongs to the above type. This assures that the query matches up with answer context in the embedding space. [FIG. 2D] shows corresponding visual of system flow.)

generating a vectorized representation…identifying a matching input field from the unstructured database by querying the unstructured database for the vectorized representation of the query field against a plurality of indexes using a vector search mechanism; (Japa [0018] The natural language question and the contextual information of the group of other entities of the candidate answer set are separately encoded to obtain an encoded vectorial representation of the natural language question and a plurality of encoded vectorial representations of the candidate answer set. [0076] These three equations correspond to a formation process of the self multi-head attention mechanism. The matrix of W is a weight matrix. The Q, V, K represent query, value, key vectors, that each multiplies its corresponding weight matrix before getting into the attention function. Repeat this process h times, according to the number of heads, h. Each of the results may be connected to obtain a new vector matrix that reflects a relationship between the query and value vectors Q and V. In particular, the self multi-head attention mechanism is adapted to expose internal connections within words, with Q=V=K=X, with X representing the word vector matrix. [0077] The multi-head attention mechanism helps the model learn the words' relevant information in different presentation sub-spaces. The self-attention mechanism can extract the dependence in words. As the name shows, the self multi-head attention mechanism integrates the benefits of both and creates a context vector for each word. [0160] artificial intelligence (AI) ... approaches comprise, e.g., naïve Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models providing different patterns of independence can be employed. Classification as used herein also is inclusive of statistical regression that is utilized to develop models of priority. [FIG. 2B and 2D] shows corresponding visual of system flow.)

scoring the matching input field based upon, at least in part, weighting from a domain model; (Japa [0018] The encoded vectorial representation of the natural language question is evaluated under an influence of a plurality of aspects of the contextual information to obtain a plurality of score values. A member of the candidate answer set is selected according to the plurality of score values to obtain a selected one of the candidate answer set as an answer to the natural language question. [0027] neural network, such as a bidirectional encoder representations from transformers (BERT) model. [0042] The QA processing module 202 uses a scoring process to score processed results of the encoded question under the influence of each answer of the candidate answer set. The QA processing module 202 evaluates the scores, e.g., according to a ranking and/or according to a score threshold to distinguish one or more answers from the candidate answer set. For example, the QA processing module 202 may determine independent scores. [0076], as quoted above. [FIG. 2B and 2D] shows corresponding visual of system flow.)

and providing a weighted result to the query using the scoring of the matching input field. (Japa [0018] A candidate answer set is gathered within the knowledge graph that includes a group of other entities. The other entities are gathered according to a predetermined proximity to the focal node, and contextual information for the group of other entities of the candidate answer set is generated from the knowledge graph. The natural language question and the contextual information of the group of other entities of the candidate answer set are separately encoded to obtain an encoded vectorial representation of the natural language question and a plurality of encoded vectorial representations of the candidate answer set. The encoding uses pre-trained language model embeddings obtained via a pre-trained bidirectional encoder representations from transformer (BERT) encoder. The encoded vectorial representation of the natural language question is evaluated under an influence of a plurality of aspects of the contextual information to obtain a plurality of score values. A member of the candidate answer set is selected according to the plurality of score values to obtain a selected one of the candidate answer set as an answer to the natural language question. See also [0027], [0042], and [0076], as quoted above. [FIG. 2B and 2D] shows corresponding visual of system flow.)

Japa does not explicitly teach generating a fuzzified representation of the parsed representation of the query field. However, Rangan teaches generating a fuzzified representation of the parsed representation of the query field; (Rangan [0152] FIG. 12 is a block diagram illustrating a vector-ordered index associated with semantic space 1200 in one embodiment according to the present invention. In this example, all data vectors in semantic space 1200 are broken into some number of discrete blocks. For the purposes of this discussion, a 4-way block split is considered. Assuming 4K bits in the vector, a 4-way split is shown.
Processing system 100 may organize the first block to allow for an efficient exact comparison of an initial 1024 bits with fuzzy comparison of the rest of the bits. Processing system 100 may further organize the second block where the second set of 1024 bits are positioned first. This allows efficient access to those vectors that have an exact match on the segment 1024-2047 bits but have a fuzzy match on 0-1023 and 2048-4096 bits. By storing four different representations of fuzzy vectors, processing system 100 is able to narrow the search space, and still perform a reasonably small number of vector comparisons. [FIG. 2] shows an overall visual.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Rangan in order to have a more efficient and accurate system via fuzzy vector representations (Rangan [0152], quoted above).

The combination does not explicitly teach wherein a fuzzification type to perform on the parsed representation of the query field is determined based on a weighting assigned to the query field. However, Wong teaches this limitation; (Wong [0025] To improve training of the neural network model 162, the source tickets 164 may include negative samples: samples that might appear to be related, but have been determined to be unrelated. The ticket generator 114 of the computing device 110 may be configured to generate data for training the neural network model 162, for example, by generating negative samples. In some examples, the ticket generator 114 stores the negative samples within the source tickets 164. However, in other examples, the ticket generator 114 dynamically generates the negative samples without storing them within the source tickets 164. This approach may substantially reduce an amount of memory needed to train the neural network model 162 by reducing a number of tickets that are stored in memory. Although the ticket generator 114 is shown as part of the computing device 110, the ticket generator 114 may be incorporated into the computing device 120, into the computing device 160, or other suitable computing devices in other examples. In some examples, the ticket generator 114 generates negative samples, such as an unlinked pair of tickets, where each of the pair of tickets is created within a same short-term processing window (e.g., 4-6 hours), is based on established positive weights for link types (e.g., weights that emphasize tickets within a same team, cross team, cross workload, or other commonly linked criteria), and/or based on at least partial matching of title text (e.g., fuzzy matching of at least 20%). [0031] The Siamese neural network model 205 includes a first neural network model 210 (e.g., a first sub-network) and a second neural network model 220 (e.g., a second sub-network) that are identical to each other (e.g., they have a same configuration with same parameters and weights). The first neural network model 210 is arranged as an input layer 212 and an output layer 214 and receives a first ticket (e.g., ticket 202) of a pair that is processed by the Siamese neural network model 205. The second neural network model 220 receives the second ticket (e.g., ticket 204) of the pair. The input layer 212 is configured to process a first text feature of the plurality of text features for a ticket, while the output layer 214 is configured to process an output of the input layer 212 and any remaining text features of the plurality of text features.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Wong in order to use specialized neural network training methods to improve the output of systems (Wong [0025] and [0031], quoted above).

The combination still does not explicitly teach wherein the plurality of indexes includes at least one of a phonemic index or a temporal index. However, Mamou teaches this limitation; (Mamou [0006] An approach for solving the OOV issue consists of converting the speech to phonetic transcripts and representing the query as a sequence of phones. Such transcripts can be generated by expanding the word transcripts into phones using the pronunciation dictionary of the ASR system. This kind of transcript is acceptable to search OOV terms that are phonetically close to in-vocabulary (IV) terms. [0081] Phonetic output is generated using a word-fragment decoder, where word-fragments are defined as variable-length sequences of phones. The decoder generates 1-best word-fragments that are then converted into the corresponding phonetic strings. [0082] Example indices may include a word index on the word confusion network (WCN); a word phone index, which is a phonetic N-gram index of the phonetic representation of the 1-best word decoding; and a phone index, a phonetic N-gram index of the 1-best fragment decoding. [FIG. 5 in conjunction with FIG. 7] shows corresponding visual of querying using a plurality of different indexes.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Mamou in order to create a more accurate system output via specialized indexes (Mamou [0079] An ASR system is used for transcribing speech data. It works in speaker-independent mode. For best recognition results, an acoustic model and a language model are trained in advance on data with similar characteristics. The ASR system generates word lattices. A compact representation of a word lattice called a word confusion network (WCN) is used. Each edge (u, v) is labeled with a word hypothesis and its posterior probability, i.e., the probability of the word given the signal. One of the main advantages of WCN is that it also provides an alignment for all of the words in the lattice. Although WCNs are more compact than word lattices, in general the 1-best path obtained from WCN has a better word accuracy than the 1-best path obtained from the corresponding word lattice. [0096] In order to control the level of fuzziness, the following two parameters are defined: δ_i, the maximal number of inserted N-grams, and δ_d, the maximal number of deleted N-grams. Those parameters are used in conjunction with the inverted indices of the phonetic transcript to efficiently find a list of indexed phrases that are different from the query phrase by at most δ_i insertions and δ_d deletions of N-grams. Note that a substitution is also allowed by an insertion and a deletion. At the end of this stage, a list of fuzzy matches is obtained and for each match, the list of documents in which it appears.)
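Mamou's fuzziness control is the one concrete algorithmic knob cited here: bound the number of phonetic N-grams that may be inserted (delta_i) or deleted (delta_d) between the query and an indexed phrase, with a substitution counting as one insertion plus one deletion. A minimal sketch of that idea follows; the phone sequences and thresholds are hypothetical illustrations, not taken from the references.

```python
from collections import Counter

def ngrams(phones, n=2):
    """Multiset of phonetic N-grams (here bigrams) of a phone sequence."""
    return Counter(tuple(phones[i:i + n]) for i in range(len(phones) - n + 1))

def fuzzy_match(query_phones, indexed_phones, max_ins=1, max_del=1, n=2):
    """Match if the indexed phrase differs from the query by at most
    max_ins inserted and max_del deleted N-grams (Mamou's delta_i / delta_d)."""
    q, d = ngrams(query_phones, n), ngrams(indexed_phones, n)
    inserted = sum((d - q).values())  # N-grams in the document but not the query
    deleted = sum((q - d).values())   # N-grams in the query but not the document
    return inserted <= max_ins and deleted <= max_del

# Hypothetical phone sequences: "tomato" vs. a near-miss pronunciation.
query = ["T", "AH", "M", "EY", "T", "OW"]
close = ["T", "AH", "M", "AA", "T", "OW"]  # one vowel substituted
print(fuzzy_match(query, close, max_ins=2, max_del=2))  # True
```

Note that the single vowel substitution perturbs two bigrams, so it is only tolerated when both the insertion and deletion budgets are at least 2, which matches Mamou's observation that a substitution is expressed as an insertion plus a deletion.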
the combination lacks explicitly and orderly teaching all of wherein performing the fuzzification type includes at least one of: generating phonemically similar representations of the parsed representation of the query field based on a phonemic similarity metric;or generating temporally similar representations of the parsed representation of the query field based on a temporal formatting of the parsed representation of the query field; However Gruber teaches wherein performing the fuzzification type includes at least one of: generating phonemically similar representations of the parsed representation of the query field based on a phonemic similarity metric; or generating temporally similar representations of the parsed representation of the query field based on a temporal formatting of the parsed representation of the query field; (Gruber [0008] Some implementations described herein generate phonetic representations for both speech recognition and synthesis based on a single spoken input. By using only a single spoken input to train speech recognition and speech synthesis processes, the number of interactions necessary to train the digital assistant can be reduced, making the digital assistant appear smarter and more human. Moreover, accepting a spoken input instead of requiring the user to type or otherwise select a textual phonetic representation in a phonetic alphabet allows a more human-like interaction with the digital assistant, thus enhancing the user experience and potentially increasing the user's confidence in the capabilities of the digital assistant [0009] Using a single speech input also offers several benefits over techniques that require a user to type in or otherwise select textual phonetic representations of a word. 
For example, users may be unfamiliar with the particular phonetic alphabet used to train the digital assistant [124] the speech-to-text processor determines the first phonetic representation by processing the speech input using an acoustic model to determine the phonemes in the utterance [0130] Rather than requiring the user to manually identify the text string, the digital assistant may identify the text string automatically. In some implementations, the digital assistant determines the text string using the first phonetic representation (505). This may be accomplished by determining that the utterance corresponds to a certain sequence of letters, even if the digital assistant does not recognize that sequence of letters as a word. For example, a speech recognizer can determine that the phonemes "tuh-may-doe" correspond to the letters "t o m a t o," even if that word is not in the speech recognizer's vocabulary. In some implementations, the digital assistant uses fuzzy matching and/or approximate matching techniques to determine the text string from the first phonetic representation. For example, if a user provides a speech input to a digital assistant asking to call "f-ill-ee-p-ay," but this particular phonetic sequence has not been associated with the name "Philippe," the digital assistant uses fuzzy matching [134-137] elaborate on the matter[FIG.3B] shows corresponding visual) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Gruber in order to enhance the user experience via specialized phonemically representations (Gruber [0008] Some implementations described herein generate phonetic representations for both speech recognition and synthesis based on a single spoken input. 
By using only a single spoken input to train speech recognition and speech synthesis processes, the number of interactions necessary to train the digital assistant can be reduced, making the digital assistant appear smarter and more human. Moreover, accepting a spoken input instead of requiring the user to type or otherwise select a textual phonetic representation in a phonetic alphabet allows a more human-like interaction with the digital assistant, thus enhancing the user experience and potentially increasing the user's confidence in the capabilities of the digital assistant.[116] a phonetic representation of the name 402 in a speech recognition alphabet (phonetic representation 404), as well as a phonetic representation of the name 402 in a speech synthesis alphabet (phonetic representation 406). Both the representation 404 in the recognition alphabet and the representation 406 in the synthesis alphabet are based on the same pronunciation, and, therefore, the user's preferred pronunciation will both be accurately recognized by the STT processing module 330 and accurately synthesized by the speech synthesis module 265.[FIG.3B] shows corresponding visual) Regarding claim 2, Japa, Rangan, Wong, Mamou and Gruber teach The computer-implemented method of claim 1, wherein the plurality of indexes further includes a verbatim index. (Rangan [0049] FIG. 1 is a block diagram of an electronic document processing system 100 in one embodiment according to the present invention. In this example, processing system 100 includes master index 105, messaging applications programming interface (MAPI) module 110, e-mail servers 115, duplicate eliminator 120, buffer manager 125, indexer 130, thread analyzer 135, topic classifier 140, analytics extraction, transformation, and loading (ETL) module 145, directory interface 150, and directory servers 155. 
Master index 105 includes e-mail tables 160, e-mail full text index 165, topic tables 170, cluster full text index 175, distribution list full text index 180, dimension tables 185, participant tables 190, and fact tables 195. E-mail servers 115 include one or more mail servers (e.g., mail server 117). Directory servers 155 include one or more directory servers (e.g., directory server 157). [0050] Master index 105 can include hardware and/or software elements that provide indexing of information associated with electronic documents, such as word processing files, presentation files, databases, e-mail message and attachments, instant messaging (IM) messages, Short Message Service (SMS) messages, Multimedia Message Service (MMS), or the like. Master index 105 may be embodied as one or more flat files, databases, data marts, data warehouses, and other repositories of data. Although the disclosure references specific examples using e-mail messages, the disclosure should not be considered as limited to only e-mail message or electronic messages only. The disclosure is applicable to other types of electronic documents as discussed above [FIG.1] shows overall visual of the index and the querying) Regarding claim 3, Japa, Rangan, Wong, Mamou and Gruber teach The computer-implemented method of claim 1, further comprising: processing an input dataset by identifying a record from the input dataset; (Rangan [0014] In various embodiments, a computer-implemented method for evaluating a search process is provided. Information is received identifying in a collection of documents a first set of documents that satisfy search criteria associated with a first search. A document feature vector is then generated for each document in the first set of documents. 
Information is received identifying in the documents in the collection of documents that do not satisfy the search criteria associated with the first search a second set of documents that satisfy first sampling criteria.[FIG.2&3] shows an overall visual of receiving documents/records) generating a fuzzified representation of an input field; (Rangan [0152] FIG. 12 is a block diagram illustrating a vector-ordered index associated with semantic space 1200 in one embodiment according to the present invention. In this example, all data vectors in semantic space 1200 are broken into some number of discrete blocks. For the purposes of this discussion, a 4-way block split is considered. Assuming 4K bits in the vector, a 4-way split is shown. Processing system 100 may organize the first block to allow fir an efficient exact comparison of an initial 1024 bits with fuzzy comparison of the rest of the bits. Processing system 100 may further organize the second block where the second set of 1024 bits are positioned first. This allows efficient access to those vectors that have an exact match on the segment 1024-2047 bits but have a fuzzy match on 0-1023 and 2048-4096 bits. By storing four different representations of fuzzy vectors, processing system 100 is able to narrow the search space, and still perform reasonably small number of vector comparisons. [FIG.2] shows an overall visual) generating a vectorized representation of the fuzzified representation of the input field; (Rangan [0151] In further embodiments, given a vector (either term or document vector), processing system 100 may find other vectors and their corresponding objects within a certain cosine distance of the supplied vector. Rather than simply scan an entire vector space linearly, performing a cosine measurement for every enumerated vector, processing system 100 may build vector-ordered storage and indexes to vector-ordered regions. 
In one embodiment, processing system 100 may split a vector into four equal-width segments and store the vector four times, with ordering based on the segment's order. Processing system 100 then may build four separate in-memory indexes into these segments. [0152] FIG. 12 is a block diagram illustrating a vector-ordered index associated with semantic space 1200 in one embodiment according to the present invention. In this example, all data vectors in semantic space 1200 are broken into some number of discrete blocks. For the purposes of this discussion, a 4-way block split is considered. Assuming 4K bits in the vector, a 4-way split is shown. Processing system 100 may organize the first block to allow for an efficient exact comparison of an initial 1024 bits with fuzzy comparison of the rest of the bits. Processing system 100 may further organize the second block where the second set of 1024 bits are positioned first. This allows efficient access to those vectors that have an exact match on the segment 1024-2047 bits but have a fuzzy match on 0-1023 and 2048-4096 bits. By storing four different representations of fuzzy vectors, processing system 100 is able to narrow the search space, and still perform a reasonably small number of vector comparisons. [FIG.2] shows an overall visual) and indexing the input field in an unstructured database by processing the vectorized representation of the input field. (Rangan [FIG.1] shows the overall system which indexes the data using the vectorized data [0012] In various embodiments, a semantic space associated with a corpus of electronically stored information (ESI) may be created. Documents (and any other objects in the ESI, in general) may be represented as vectors in the semantic space. Vectors may correspond to identifiers, such as, for example, indexed terms. The semantic space for a corpus of ESI can be used in information filtering, information retrieval, indexing, and relevancy rankings.
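The four-way block split Rangan describes in [0151]-[0152] can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions (bit vectors held as Python ints, dictionary-based in-memory indexes), not Rangan's actual implementation; the names `segments`, `build_indexes`, and `candidates` are invented for the example.

```python
# Sketch of a block-split, vector-ordered index: each 4096-bit vector is
# effectively stored four times, once per 1024-bit segment, so an exact
# match on any one segment narrows the candidate set before the expensive
# fuzzy comparison of the remaining bits.

SEG_BITS = 1024
NUM_SEGS = 4

def segments(vec: int):
    """Split a 4096-bit vector (a Python int) into four 1024-bit
    segments, most significant segment first."""
    mask = (1 << SEG_BITS) - 1
    return [(vec >> (SEG_BITS * (NUM_SEGS - 1 - i))) & mask
            for i in range(NUM_SEGS)]

def build_indexes(vectors):
    """One dict per segment position, mapping segment value -> vector ids."""
    indexes = [{} for _ in range(NUM_SEGS)]
    for vid, vec in enumerate(vectors):
        for pos, seg in enumerate(segments(vec)):
            indexes[pos].setdefault(seg, []).append(vid)
    return indexes

def candidates(indexes, query):
    """Vectors matching the query exactly on at least one segment; only
    these need a fuzzy comparison on the other bits."""
    hits = set()
    for pos, seg in enumerate(segments(query)):
        hits.update(indexes[pos].get(seg, []))
    return hits
```

A vector that differs from the query in only one segment is still retrieved through its three unchanged segments, which is the access pattern the quoted passage describes.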
[0050] Master index 105 can include hardware and/or software elements that provide indexing of information...[0082] As noted earlier, concept searching techniques are most applicable when they can reveal semantic meanings of a corpus without a supervised learning phase. One method includes Singular Value Decomposition (SVD) also is known with Latent Semantic Indexing (LSI). LSI is one of the most well-known approaches to semantic evaluation of documents) Regarding claim 4, Japa, Rangan, Wong, Mamou and Gruber teach The computer-implemented method of claim 3, wherein processing the input dataset includes defining a domain model for the input field with a default weighting. (Japa [0018] The encoded vectorial representation of the natural language questions is evaluated under an influence of a plurality of aspects of the contextual information to obtain a plurality of score values. A member of the candidate answer set is selected according to the plurality of score values to obtain a selected one of the candidate answer set as an answer to the natural language question. [0027] neural network, such as a bidirectional encoder representations from transformers (BERT) model. [0042] The QA processing module 202 uses a scoring process to score processed results of the encoded question under the influence of each answer of the candidate answer set. The QA processing module 202 evaluates the scores, e.g., according to a ranking and/or according to a score threshold to distinguish one or more answers from the candidate answer set. For example, the QA processing module 202 may determine independent scores [0076] self multi-head attention mechanism. The matrix of W is a weight matrix. The Q, V, K represent query, value, key vectors, that each multiplies its corresponding weight matrix before getting into the attention function. Repeat this process h times, according to the number of heads, h. 
Each of the results may be connected to obtain a new vector matrix that reflects a relationship between the query and value vectors Q and V. In particular, [FIG.2B and 2D] show a corresponding visual of the system flow for the self multi-head attention mechanism) Regarding claim 5, Japa, Rangan, Wong, Mamou and Gruber teach The computer-implemented method of claim 4, wherein scoring the matching input field includes processing a weighting provided in the query. (Japa [0018] The encoded vectorial representation of the natural language questions is evaluated under an influence of a plurality of aspects of the contextual information to obtain a plurality of score values. A member of the candidate answer set is selected according to the plurality of score values to obtain a selected one of the candidate answer set as an answer to the natural language question. [0027] neural network, such as a bidirectional encoder representations from transformers (BERT) model. [0042] The QA processing module 202 uses a scoring process to score processed results of the encoded question under the influence of each answer of the candidate answer set. The QA processing module 202 evaluates the scores, e.g., according to a ranking and/or according to a score threshold to distinguish one or more answers from the candidate answer set. For example, the QA processing module 202 may determine independent scores [0076] self multi-head attention mechanism. The matrix of W is a weight matrix. The Q, V, K represent query, value, key vectors, that each multiplies its corresponding weight matrix before getting into the attention function. Repeat this process h times, according to the number of heads, h. Each of the results may be connected to obtain a new vector matrix that reflects a relationship between the query and value vectors Q and V.
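The self multi-head attention step quoted from Japa [0076] can be illustrated with a short NumPy sketch: Q, K, and V each multiply their own weight matrix, the attention function runs once per head, and the h per-head results are connected (concatenated) into a new matrix. The dimensions, initialization, and function names below are assumptions for illustration, not Japa's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, h):
    """X: (seq_len, d_model); Wq/Wk/Wv: lists of h weight matrices of
    shape (d_model, d_head). Returns (seq_len, h * d_head)."""
    heads = []
    for i in range(h):
        # Each of Q, K, V multiplies its own weight matrix (per head)
        Q, K, V = X @ Wq[i], X @ Wk[i], X @ Wv[i]
        # Scaled dot-product attention for this head
        scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
        heads.append(scores @ V)
    # Connect the h per-head results into one matrix
    return np.concatenate(heads, axis=-1)
```

With h heads of width d_head, the connected output has width h * d_head, matching the "repeat h times, then connect" description in the quote.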
In particular, [FIG.2B and 2D] show a corresponding visual of the system flow for the self multi-head attention mechanism) Regarding claim 6, Japa, Rangan, Wong, Mamou and Gruber teach The computer-implemented method of claim 5, wherein processing the weighting provided in the query includes replacing the default weighting in the domain model for the input field with the weighting provided in the query. (Japa [0052] In at least some embodiments, a rule-based proximity value may be adapted and/or otherwise modified according to a candidate answer set. For example, a number of candidate answers returned from a first proximity value, e.g., 2-hop, may be increased, e.g., to 3-hop, if a number of candidate answers returned according to the first proximity value fails to satisfy a candidate answer set threshold value. It is understood that the proximity value may be increased and/or decreased according to the threshold value. [0057] The KB-QA system 210 can be configured with a scoring module adapted to obtain a similarity score value. The similarity score value may be indicative of a measure of similarity according to the cross-attention module for a particular answer of the candidate answer set. For example, the scoring module may calculate a first similarity score for the first cross-attention result 218 and a second similarity score for the second cross-attention result 219. [0067] A masked language modeling (MLM) or "Cloze test" may be used to predict the missing tokens from their placeholders in a given sequence, when a subset of tokens Y ⊂ X is sampled and substituted with a placeholder set of tokens. In a BERT MLM implementation, the value Y may account for some percentage, e.g., 15%, of the tokens in X. Of those, another percentage, e.g., 80%, may be replaced with a [MASK], and another percentage, e.g., 10%, replaced with a random token, e.g., according to a unigram distribution, while another 10% may be kept unchanged.
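The 15% / 80-10-10 corruption scheme quoted above can be sketched directly. The percentages follow the quoted description; the function name, the `None` marker for unselected positions, and the toy vocabulary are assumptions for illustration, not BERT's or Japa's actual code.

```python
import random

def mask_tokens(tokens, vocab, select_prob=0.15, rng=None):
    """BERT-style MLM corruption sketch: ~15% of tokens are selected;
    of those, 80% become [MASK], 10% become a random vocabulary token,
    and 10% are kept unchanged. Returns (corrupted, targets), where
    targets holds the original token at selected positions, else None."""
    rng = rng or random.Random()
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < select_prob:
            targets.append(tok)          # model must predict the original
            r = rng.random()
            if r < 0.8:
                corrupted.append("[MASK]")
            elif r < 0.9:
                corrupted.append(rng.choice(vocab))  # random token
            else:
                corrupted.append(tok)    # kept unchanged
        else:
            targets.append(None)         # not part of the prediction set
            corrupted.append(tok)
    return corrupted, targets
```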
A goal of this approach is to predict a modified input from the original tokens in Y. BERT selects each token in Y independently by randomly selecting a subset.) Regarding claim 7, Japa, Rangan, Wong, Mamou and Gruber teach The computer-implemented method of claim 1, wherein providing the weighted result to the query using the scoring of the matching input field includes: comparing the scoring of the matching input field to a threshold associated with the matching input field; (Japa [0027] The system 100 may be further adapted to determine an answer to the question, e.g., by comparing results of responses of the cross-attention neural networks to the question under the influence of the candidate answer set aspects. For example, the system 100 may calculate a respective similarity score between the question and each corresponding candidate answer set and select a final answer or answers according to the scores [0053] more than one candidate answer sets may be obtained, e.g., according to different proximity values, with each of the candidate answer sets independently processed according to the techniques disclosed herein. A post-processing layer may be applied to separate results, e.g., to compare the separately obtained answer results, e.g., to identify a confidence measure. For example, a greater confidence value may be determined for situations in which the independently obtained results agree. [0058] the KB-QA system 210 includes a ranking module. The ranking module may be adapted to compare cross-attention results determined for the candidate answers. For example, the ranking module may compare similarity scores determined for each answer to a threshold. If the similarity score is above a score threshold, the candidate answer may be selected as an answer to the question. It is envisioned that in at least some embodiments, more than one answer may be selected based on a threshold comparison.
Alternatively or in addition, the ranking module may perform a ranking of the cross-attention results based upon their corresponding similarity score values. For example, an answer and/or answers to the question may be determined according to the ranking. In some embodiments, selection of the answer may be determined according to a comparison of scores to a threshold and a ranking.) and providing the weighted result to the query in response to the scoring of the matching input field exceeding the threshold associated with the matching input field. (Japa [0052] In at least some embodiments, a rule-based proximity value may be adapted and/or otherwise modified according to a candidate answer set. For example, a number of candidate answers returned from a first proximity value, e.g., 2-hop, may be increased, e.g., to 3-hop, if a number of candidate answers returned according to the first proximity value fails to satisfy a candidate answer set threshold value. It is understood that the proximity value may be increased and/or decreased according to the threshold value. [0058] In at least some embodiments, the KB-QA system 210 includes a ranking module. The ranking module may be adapted to compare cross-attention results determined for the candidate answers. For example, the ranking module may compare similarity scores determined for each answer to a threshold. If the similarity score is above a score threshold, the candidate answer may be selected as an answer to the question. It is envisioned that in at least some embodiments, more than one answer may be selected based on a threshold comparison. Alternatively or in addition, the ranking module may perform a ranking of the cross-attention results based upon their corresponding similarity score values. For example, an answer and/or answers to the question may be determined according to the ranking. In some embodiments, selection of the answer may be determined according to a comparison of scores to a threshold and a ranking.
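The threshold-then-rank answer selection described in Japa [0058] reduces to a few lines. This sketch assumes candidate answers arrive as (answer, similarity score) pairs and adds a hypothetical `top_k` cutoff; it is an illustration of the quoted logic, not Japa's implementation, and the scores themselves would come from the cross-attention results.

```python
def select_answers(scored_candidates, threshold, top_k=None):
    """Keep candidates whose similarity score exceeds the threshold,
    then rank by score (descending). More than one answer may be
    selected, per the quoted description."""
    kept = [(a, s) for a, s in scored_candidates if s > threshold]
    kept.sort(key=lambda pair: pair[1], reverse=True)  # rank by score
    return kept[:top_k] if top_k is not None else kept
```

For example, with candidates scored 0.92, 0.55, and 0.71 and a 0.6 threshold, two answers survive and are returned highest-score first.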
[0095] Results were obtained by comparing a performance of the example technique with other information-retrieval (IR) based approaches. The results are presented in Table 1. Based on the tabulated results, the example approach (LMKB-QA) obtained an F1 score of 50.9 on Web-Questions using the topic entity predicted by Freebase API. According to Table 1, the proposed technique achieves better results or even competes with state-of-the-art. This demonstrates an effectiveness using BERT pre-trained language model.) Claims 8, 9 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over US 20120296891 A1; Rangan; Venkat (hereinafter Rangan) in view of US 20140365216 A1; GRUBER; Thomas R. et al. (hereinafter Gruber). Regarding claim 8, Rangan teaches A computing system comprising: a memory; and a processor (Rangan [FIG.21] shows a corresponding system with memory and processor) configured to process an input dataset by identifying a record from the input dataset (Rangan [0014] In various embodiments, a computer-implemented method for evaluating a search process is provided. Information is received identifying in a collection of documents a first set of documents that satisfy search criteria associated with a first search. A document feature vector is then generated for each document in the first set of documents. Information is received identifying in the documents in the collection of documents that do not satisfy the search criteria associated with the first search a second set of documents that satisfy first sampling criteria. [FIG.2&3] shows an overall visual of receiving documents/records) to generate a fuzzified representation of an input field (Rangan [0152] FIG. 12 is a block diagram illustrating a vector-ordered index associated with semantic space 1200 in one embodiment according to the present invention. In this example, all data vectors in semantic space 1200 are broken into some number of discrete blocks.
For the purposes of this discussion, a 4-way block split is considered. Assuming 4K bits in the vector, a 4-way split is shown. Processing system 100 may organize the first block to allow for an efficient exact comparison of an initial 1024 bits with fuzzy comparison of the rest of the bits. Processing system 100 may further organize the second block where the second set of 1024 bits are positioned first. This allows efficient access to those vectors that have an exact match on the segment 1024-2047 bits but have a fuzzy match on 0-1023 and 2048-4096 bits. By storing four different representations of fuzzy vectors, processing system 100 is able to narrow the search space, and still perform a reasonably small number of vector comparisons. [FIG.2] shows an overall visual) to generate a vectorized representation of the fuzzified representation (Rangan [0151] In further embodiments, given a vector (either term or document vector), processing system 100 may find other vectors and their corresponding objects within a certain cosine distance of the supplied vector. Rather than simply scan an entire vector space linearly, performing a cosine measurement for every enumerated vector, processing system 100 may build vector-ordered storage and indexes to vector-ordered regions. In one embodiment, processing system 100 may split a vector into four equal-width segments and store the vector four times, with ordering based on the segment's order. Processing system 100 then may build four separate in-memory indexes into these segments. [0152] FIG. 12 is a block diagram illustrating a vector-ordered index associated with semantic space 1200 in one embodiment according to the present invention. In this example, all data vectors in semantic space 1200 are broken into some number of discrete blocks. For the purposes of this discussion, a 4-way block split is considered. Assuming 4K bits in the vector, a 4-way split is shown.
Processing system 100 may organize the first block to allow for an efficient exact comparison of an initial 1024 bits with fuzzy comparison of the rest of the bits. Processing system 100 may further organize the second block where the second set of 1024 bits are positioned first. This allows efficient access to those vectors that have an exact match on the segment 1024-2047 bits but have a fuzzy match on 0-1023 and 2048-4096 bits. By storing four different representations of fuzzy vectors, processing system 100 is able to narrow the search space, and still perform a reasonably small number of vector comparisons. [FIG.2] shows an overall visual) to index the input field in an unstructured database by processing the vectorized representation of the input field. (Rangan [FIG.1] shows the overall system which indexes the data using the vectorized data [0012] In various embodiments, a semantic space associated with a corpus of electronically stored information (ESI) may be created. Documents (and any other objects in the ESI, in general) may be represented as vectors in the semantic space. Vectors may correspond to identifiers, such as, for example, indexed terms. The semantic space for a corpus of ESI can be used in information filtering, information retrieval, indexing, and relevancy rankings. [0050] Master index 105 can include hardware and/or software elements that provide indexing of information...[0082] As noted earlier, concept searching techniques are most applicable when they can reveal semantic meanings of a corpus without a supervised learning phase. One method includes Singular Value Decomposition (SVD), also known as Latent Semantic Indexing (LSI).
LSI is one of the most well-known approaches to semantic evaluation of documents) Rangan does not explicitly teach performing a fuzzification type comprising at least one of: generating phonemically similar representations associated with the input field based on a phonemic similarity metric; or generating temporally similar representations associated with the input field based on a temporal formatting of the input field. However, Gruber teaches performing a fuzzification type comprising at least one of: generating phonemically similar representations associated with the input field based on a phonemic similarity metric; or generating temporally similar representations associated with the input field based on a temporal formatting of the input field (Gruber [0008] Some implementations described herein generate phonetic representations for both speech recognition and synthesis based on a single spoken input. By using only a single spoken input to train speech recognition and speech synthesis processes, the number of interactions necessary to train the digital assistant can be reduced, making the digital assistant appear smarter and more human. Moreover, accepting a spoken input instead of requiring the user to type or otherwise select a textual phonetic representation in a phonetic alphabet allows a more human-like interaction with the digital assistant, thus enhancing the user experience and potentially increasing the user's confidence in the capabilities of the digital assistant [0009] Using a single speech input also offers several benefits over techniques that require a user to type in or otherwise select textual phonetic representations of a word.
For example, users may be unfamiliar with the particular phonetic alphabet used to train the digital assistant [0124] the speech-to-text processor determines the first phonetic representation by processing the speech input using an acoustic model to determine the phonemes in the utterance [0130] Rather than requiring the user to manually identify the text string, the digital assistant may identify the text string automatically. In some implementations, the digital assistant determines the text string using the first phonetic representation (505). This may be accomplished by determining that the utterance corresponds to a certain sequence of letters, even if the digital assistant does not recognize that sequence of letters as a word. For example, a speech recognizer can determine that the phonemes "tuh-may-doe" correspond to the letters "t o m a t o," even if that word is not in the speech recognizer's vocabulary. In some implementations, the digital assistant uses fuzzy matching and/or approximate matching techniques to determine the text string from the first phonetic representation. For example, if a user provides a speech input to a digital assistant asking to call "f-ill-ee-p-ay," but this particular phonetic sequence has not been associated with the name "Philippe," the digital assistant uses fuzzy matching. [0134-0137] elaborate on the matter. [FIG.3B] shows a corresponding visual) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Gruber in order to enhance the user experience via specialized phonemic representations (Gruber [0008] Some implementations described herein generate phonetic representations for both speech recognition and synthesis based on a single spoken input.
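Gruber's fuzzy/approximate phonetic matching (e.g., resolving "f-ill-ee-p-ay" to "Philippe") can be approximated by a similarity lookup over known pronunciations. The lexicon, phoneme spellings, and the use of `difflib` below are illustrative assumptions; Gruber does not specify a particular matcher.

```python
import difflib

def fuzzy_phonetic_lookup(phonemes, lexicon, cutoff=0.6):
    """Match an unrecognized phoneme string against known pronunciations
    by similarity rather than exact equality.
    phonemes: hyphenated phoneme string, e.g. 'f-ill-ee-p-ay'.
    lexicon: dict mapping phoneme strings to names/words."""
    match = difflib.get_close_matches(phonemes, lexicon.keys(),
                                      n=1, cutoff=cutoff)
    return lexicon[match[0]] if match else None
```

A production system would match over phoneme sequences with an acoustically informed distance; string similarity merely stands in for that idea here.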
By using only a single spoken input to train speech recognition and speech synthesis processes, the number of interactions necessary to train the digital assistant can be reduced, making the digital assistant appear smarter and more human. Moreover, accepting a spoken input instead of requiring the user to type or otherwise select a textual phonetic representation in a phonetic alphabet allows a more human-like interaction with the digital assistant, thus enhancing the user experience and potentially increasing the user's confidence in the capabilities of the digital assistant. [0116] a phonetic representation of the name 402 in a speech recognition alphabet (phonetic representation 404), as well as a phonetic representation of the name 402 in a speech synthesis alphabet (phonetic representation 406). Both the representation 404 in the recognition alphabet and the representation 406 in the synthesis alphabet are based on the same pronunciation, and, therefore, the user's preferred pronunciation will both be accurately recognized by the STT processing module 330 and accurately synthesized by the speech synthesis module 265. [FIG.3B] shows a corresponding visual) Regarding claim 9, Rangan and Gruber teach The computing system of claim 8, wherein processing the input dataset includes defining a domain model for the input field with a default weighting. (Rangan [0068] Portal 202 includes software elements for accessing and presenting information provided by the indexer 204. In this example, the portal 202 includes web applications 212 communicatively coupled to information gathering and presentation resources, such as a Java Server Page (JSP) module 214, a query engine 216, a query optimization module 218, an analytics module 220, and a domain templates module 222. [0078] input text into a semantic model, typically by employing a mathematical analysis technique over a representation called vector space model.
This model captures a statistical signature of a document, its terms and their occurrences. A matrix derived from the corpus is then analyzed using a matrix decomposition technique. [0080] First are supervised learning systems. In the supervised learning model, an entirely different approach is taken. A main requirement in this model is supplying a previously established collection of documents that constitutes a training set. The training set contains several examples of documents belonging to specific concepts. The learning algorithm analyzes these documents and builds a model. [0165] boosts or other weighting or ranking influences. In a further embodiment, the closest terms identified in step 1425 may be presented as a "preview" for a user to select from. Processing system 100 then may alter generation of the query. Method 1400 continues via step "A" in FIG. 14B. [FIG.1] shows an overall visual of the system) Regarding claim 13, Rangan and Gruber teach The computing system of claim 8, wherein indexing the input field in the unstructured database includes indexing the vectorized representation of the input field in a verbatim index. (Rangan [0049] FIG. 1 is a block diagram of an electronic document processing system 100 in one embodiment according to the present invention. In this example, processing system 100 includes master index 105, messaging applications programming interface (MAPI) module 110, e-mail servers 115, duplicate eliminator 120, buffer manager 125, indexer 130, thread analyzer 135, topic classifier 140, analytics extraction, transformation, and loading (ETL) module 145, directory interface 150, and directory servers 155. Master index 105 includes e-mail tables 160, e-mail full text index 165, topic tables 170, cluster full text index 175, distribution list full text index 180, dimension tables 185, participant tables 190, and fact tables 195. E-mail servers 115 include one or more mail servers (e.g., mail server 117).
Directory servers 155 include one or more directory servers (e.g., directory server 157). [0050] Master index 105 can include hardware and/or software elements that provide indexing of information associated with electronic documents, such as word processing files, presentation files, databases, e-mail messages and attachments, instant messaging (IM) messages, Short Message Service (SMS) messages, Multimedia Message Service (MMS) messages, or the like. Master index 105 may be embodied as one or more flat files, databases, data marts, data warehouses, and other repositories of data. Although the disclosure references specific examples using e-mail messages, the disclosure should not be considered as limited to e-mail messages or electronic messages only. The disclosure is applicable to other types of electronic documents as discussed above. [FIG.1] shows an overall visual of the index and the querying) Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Rangan in view of Gruber and Japa. Regarding claim 10, Rangan and Gruber teach The computing system of claim 9. Rangan does not explicitly teach wherein defining the domain model for the input field includes generating the default weighting using a generative AI model. However Japa teaches wherein defining the domain model for the input field includes generating the default weighting using a generative AI model (Japa [0017] transformer (BERT) incorporates pre-trained language model embeddings to encode a question and candidate answer contexts from a knowledge base. The BERT may be pretrained in a general manner that is not necessarily directed to any particular task and then, subsequently fine-tuned for a question answer (QA) task, using a multi-head attention mechanism that may be based on a convolution neural network encoder. [0025] At least one example approach is referred to herein as a language-model-based knowledge base QA (LM-KBQA) approach.
The LM-KBQA approach exploits BERT pre-trained language model embeddings to capture contextual representations. Beneficially, the BERT pre-trained language model embeddings eliminate any need for Recurrent Neural Network (RNN) architectures, such as long short-term memory (LSTM) and/or bidirectional LSTM, which would impose a much greater burden on any underlying KB-QA processing system. In more detail, the LM-KBQA approach may utilize one or more CNN encoders. The CNN encoder(s) may include a self multi-head attention mechanism adapted to fine-tune the BERT embedding for the KB-QA task. [0055] The example KB-QA system 210 includes a BERT language model processor 211 adapted to encode a natural language question received by the question module 212. The BERT language model processor may be pre-trained, e.g., in a general sense, without regard to any particular task, and subsequently fine-tuned according to the QA task. In at least some embodiments, the BERT language model processor 211 is fine-tuned according to a CNN encoder 216 employing multi-head attention for relationship understanding. According to the multi-head attention feature of the CNN encoder 216, the answer embeddings 215 may evaluate one or more different aspects of each of the candidate answers. The question encodings obtained via the BERT language model processor 211 are stored in a question embedding matrix 214.
Similarly, the BERT language model processor 211 is adapted to separately encode information of the candidate answer set and store the results according to answer embedding matrixes 215, e.g., a different answer embedding matrix for each answer of the candidate set of answers. [FIG.2D] shows a visual) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Japa in order to help facilitate performance of the system via generative AI methods (Japa [0019] One or more aspects of the subject disclosure include a device including a processing system including a processor and a memory that stores executable instructions. The executable instructions, when executed by the processing system, facilitate performance of operations that include identifying, without human intervention, a main entity of a natural language question, locating, within a knowledge graph, a focal node corresponding to the main entity, and identifying, within the knowledge graph, a candidate answer set including a group of other entities within a predetermined proximity of the focal node. [0108] It is understood that increasing model size when pretraining natural language representations may improve performance on downstream tasks. However, at some point further model increases may become harder due to graphic processing unit (GPU)/tensor processing unit (TPU) memory limitations and longer training times. The system, devices, techniques and software disclosed herein may be applied in a manner that is well suited to operation on or with processing devices having low resources, e.g., limitations of one or more of processing power, memory capacity, storage capacity, supply power, communication channel capacity, and so on.
For example, low-memory variants of the BERT language model for self-supervised learning of language representations, such as DistilBERT. [0160] Some of the embodiments described herein can also employ artificial intelligence (AI) to facilitate automating one or more features described herein. The embodiments (e.g., in connection with automatically identifying acquired cell sites that provide a maximum value/benefit after addition to an existing communication network) can employ various AI-based schemes for carrying out various embodiments...) Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Rangan in view of Gruber and Mamou. Regarding claim 11, Rangan and Gruber teach The computing system of claim 8. Rangan does not explicitly teach wherein indexing the input field in the unstructured database includes indexing the vectorized representation of the input field in a phonemic index. However Mamou teaches wherein indexing the input field in the unstructured database includes indexing the vectorized representation of the input field in a phonemic index. (Mamou [0006] An approach for solving the OOV issue consists of converting the speech to phonetic transcripts and representing the query as a sequence of phones. Such transcripts can be generated by expanding the word transcripts into phones using the pronunciation dictionary of the ASR system. This kind of transcript is acceptable to search OOV terms that are phonetically close to in-vocabulary (IV) terms. [0081] Phonetic output is generated using a word-fragment decoder, where word-fragments are defined as variable-length sequences of phones.
The decoder generates 1-best word-fragments that are then converted into the corresponding phonetic strings. [0082] Example indices may include a word index on the word confusion network (WCN); a word phone index, which is a phonetic N-gram index of the phonetic representation of the 1-best word decoding; and a phone index, which is a phonetic N-gram index of the 1-best fragment decoding. [FIG.5 in conjunction with FIG.7] shows a corresponding visual of querying using a plurality of different indexes) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Mamou in order to create a more accurate system output via specialized indexes (Mamou [0079] An ASR system is used for transcribing speech data. It works in speaker-independent mode. For best recognition results, an acoustic model and a language model are trained in advance on data with similar characteristics. The ASR system generates word lattices. A compact representation of a word lattice called a word confusion network (WCN) is used. Each edge (u, v) is labeled with a word hypothesis and its posterior probability, i.e., the probability of the word given the signal. One of the main advantages of WCN is that it also provides an alignment for all of the words in the lattice. Although WCNs are more compact than word lattices, in general the 1-best path obtained from WCN has a better word accuracy than the 1-best path obtained from the corresponding word lattice. [0096] In order to control the level of fuzziness, the following two parameters are defined: δi, the maximal number of inserted N-grams, and δd, the maximal number of deleted N-grams.
Those parameters are used in conjunction with the inverted indices of the phonetic transcript to efficiently find a list of indexed phrases that are different from the query phrase by at most δi insertions and δd deletions of N-grams. Note that a substitution is also allowed by an insertion and a deletion. At the end of this stage, a list of fuzzy matches is obtained and for each match, the list of documents in which it appears.)

Regarding claim 12, Rangan and Gruber teach The computing system of claim 8. Rangan lacks explicitly and orderly teaching wherein indexing the input field in the unstructured database includes indexing the vectorized representation of the input field in a temporal index. However Mamou teaches wherein indexing the input field in the unstructured database includes indexing the vectorized representation of the input field in a temporal index. (Mamou [0109] indices may also be combined based on offsets within a transcript instead of timestamps. The combining of the indices based on timestamps or offsets may be carried out by the TA. [0139] The method of using the combined word and sub-word index also permits a ranking model based on temporal proximity. In one embodiment of a ranking model, for OOV term ranking, information provided by the phonetic index is used. A higher rank is given to occurrences of OOV terms that contain phones that are close in time to each other. A scoring function is defined that is related to the average gap in time between the different phones. [FIG.5 in conjunction with FIG.7] shows corresponding visual of querying using a plurality of different indexes) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Mamou in order to create a more accurate system output via specialized indexes (Mamou [0079] An ASR system is used for transcribing speech data.
It works in speaker-independent mode. For best recognition results, an acoustic model and a language model are trained in advance on data with similar characteristics. The ASR system generates word lattices. A compact representation of a word lattice called a word confusion network (WCN) is used. Each edge (u, v) is labeled with a word hypothesis and its posterior probability, i.e., the probability of the word given the signal. One of the main advantages of WCN is that it also provides an alignment for all of the words in the lattice. Although WCNs are more compact than word lattices, in general the 1-best path obtained from WCN has a better word accuracy than the 1-best path obtained from the corresponding word lattice. [0096] In order to control the level of fuzziness, the following two parameters are defined: δi, the maximal number of inserted N-grams, and δd, the maximal number of deleted N-grams. Those parameters are used in conjunction with the inverted indices of the phonetic transcript to efficiently find a list of indexed phrases that are different from the query phrase by at most δi insertions and δd deletions of N-grams. Note that a substitution is also allowed by an insertion and a deletion. At the end of this stage, a list of fuzzy matches is obtained and for each match, the list of documents in which it appears.)

Claims 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Rangan in view of Japa, Wong, Mamou and Gruber.

Regarding claim 14, Japa, Rangan, and Gruber teach generate a fuzzified representation of the parsed representation of the query field; (Rangan [0152] FIG. 12 is a block diagram illustrating a vector-ordered index associated with semantic space 1200 in one embodiment according to the present invention. In this example, all data vectors in semantic space 1200 are broken into some number of discrete blocks. For the purposes of this discussion, a 4-way block split is considered.
Assuming 4K bits in the vector, a 4-way split is shown. Processing system 100 may organize the first block to allow for an efficient exact comparison of an initial 1024 bits with fuzzy comparison of the rest of the bits. Processing system 100 may further organize the second block where the second set of 1024 bits are positioned first. This allows efficient access to those vectors that have an exact match on the segment 1024-2047 bits but have a fuzzy match on 0-1023 and 2048-4096 bits. By storing four different representations of fuzzy vectors, processing system 100 is able to narrow the search space, and still perform a reasonably small number of vector comparisons. [FIG.2] shows an overall visual) generate a vectorized representation of the fuzzified representation of the field; (Rangan [0151] In further embodiments, given a vector (either term or document vector), processing system 100 may find other vectors and their corresponding objects within a certain cosine distance of the supplied vector. Rather than simply scan an entire vector space linearly, performing a cosine measurement for every enumerated vector, processing system 100 may build vector-ordered storage and indexes to vector-ordered regions. In one embodiment, processing system 100 may split a vector into four equal-width segments and store the vector four times, with ordering based on the segment's order. Processing system 100 then may build four separate in-memory indexes into these segments. [0152] FIG. 12 is a block diagram illustrating a vector-ordered index associated with semantic space 1200 in one embodiment according to the present invention. In this example, all data vectors in semantic space 1200 are broken into some number of discrete blocks. For the purposes of this discussion, a 4-way block split is considered. Assuming 4K bits in the vector, a 4-way split is shown.
Processing system 100 may organize the first block to allow for an efficient exact comparison of an initial 1024 bits with fuzzy comparison of the rest of the bits. Processing system 100 may further organize the second block where the second set of 1024 bits are positioned first. This allows efficient access to those vectors that have an exact match on the segment 1024-2047 bits but have a fuzzy match on 0-1023 and 2048-4096 bits. By storing four different representations of fuzzy vectors, processing system 100 is able to narrow the search space, and still perform a reasonably small number of vector comparisons. [FIG.2] shows an overall visual) score the input field based upon, at least in part, weighting from a domain model; and provide a weighted result to the query using the scoring of the input field. (Rangan [0068] Portal 202 includes software elements for accessing and presenting information provided by the indexer 204. In this example, the portal 202 includes web applications 212 communicatively coupled to information gathering and presentation resources, such as a Java Server Page (JSP) module 214, a query engine 216, a query optimization module 218, an analytics module 220, and a domain templates module 222. [0078] input text into a semantic model, typically by employing a mathematical analysis technique over a representation called vector space model. This model captures a statistical signature of a document, its terms and their occurrences. A matrix derived from the corpus is then analyzed using a matrix decomposition technique [0080] First are supervised learning systems. In the supervised learning model, an entirely different approach is taken. A main requirement in this model is supplying a previously established collection of documents that constitutes a training set. The training set contains several examples of documents belonging to specific concepts.
The learning algorithm analyzes these documents and builds a model [0165] boosts or other weighting or ranking influences. In further embodiment, the closest terms identified in step 1425 may be presented as a "preview" for a user to select from. Processing system 100 then may alter generation of the query. Method 1400 continues via step "A" in FIG. 14B. [FIG.1] shows overall visual of the system) Rangan lacks explicitly and orderly teaching process a query for obtaining data from an unstructured database; generate a parsed representation of a query field of the query; identify the input field from the unstructured database by querying the unstructured database for the vectorized representation of the query field against a plurality of indexes using a vector search mechanism; However Japa teaches process a query for obtaining data from an unstructured database; (Japa [0034] In various embodiments, the content sources 175 include broadcast television and radio sources, video on demand platforms and streaming video and audio services platforms, one or more content data networks, data servers, web servers and other content servers, and/or other sources of media. [0037] The example system 100 further includes a question-answer system adapted to determine answers to natural language questions from information maintained by the knowledge base management system 180. For example, the system 100 may include a question answer (QA) server 183 hosting a back-end question-answer service. The QA server 183 receives a query via the communication network 125, processes the query and generates an answer according to information maintained by the knowledge base management system. In some embodiments, the QA server 183 is collocated [0040] The QA server 183 may process the query to determine an answer and forward the answer to the user via one or more of a voice response via the telephone device 134 and/or via some other mode, such as an email or text message.
The answer may be converted from text to voice at the QA server 183, within the communications network 125 for delivery to the user via the voice access 130. [FIG.1 & 3] shows a visual of the overall query system obtaining data from an unstructured database) generate a parsed representation of a query field of the query (Japa [0004] knowledge-based QA systems may accept natural language as a query, offering a more user-friendly solution. There are two primary approaches for the task of QA: (i) semantic parsing based systems (SP-based), and (ii) information retrieval-based systems (IR-based). The SP-based approaches address the QA problem by constructing a semantic parser that converts a natural language question into a conditionally structured expressions, like the logical forms, and then run the query on the knowledge base to obtain the answer. The SP-based approaches generally convert candidate entity-predicate pairs into a query statement and query the knowledge base to obtain an answer. Example SP-based systems may include three modules: (i) an entity linking module, adapted to recognizes all entity mentions in a question and link each mention to an entity in the knowledge base; (ii) a predicate mapping module adapted to find candidate predicates for the question within the knowledge base; and (iii) an answer selection module.[0093] A pre-trained BERT embeddings base uncased version was used for knowledge base-QA training. During tokenization, BERT code uses a word-piece algorithm to split words into sub-words and all less frequent words will be split into two or more sub-words. The vocabulary size of BERT was 30522. A delexicalization strategy was adopted. For each question, the candidate entity mentions those belonging to date, ordinal, or number are replaced with their type. Same is applied on answer context from knowledge base text if the overlap belongs to above type. 
This assures that the query matches up with answer context in the embedding space.[FIG.2D] shows corresponding visual of system flow) identify the input field from the unstructured database by querying the unstructured database for the vectorized representation of the query field against a plurality of indexes using a vector search mechanism; (Japa [0018] The natural language question and the contextual information of the group of other entities of the candidate answer set are separately encoded to obtain an encoded vectorial representation of the natural language question and a plurality of encoded vectorial representations of the candidate answer set.[0076] These three equations correspond to a formation process of the self multi-head attention mechanism. The matrix of W is a weight matrix. The Q, V, K represent query, value, key vectors, that each multiplies its corresponding weight matrix before getting into the attention function. Repeat this process h times, according to the number of heads, h. Each of the results may be connected to obtain a new vector matrix that reflects a relationship between the query and value vectors Q and V. In particular, the self multi-head attention mechanism, is adapted to expose internal connections within words, with Q=V=K=X, with X representing the word vector matrix.[0077] The multi-head attention mechanism helps the model learn the words relevant information in different presentation sub-spaces. The self-attention mechanism can extract the dependence in words. As the name shows, the self multi-head attention mechanism integrates the benefits of both, creates a context vector for each word. [0160] artificial intelligence (AI) ...approaches comprise, e.g., naïve Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models providing different patterns of independence can be employed. 
Classification as used herein also is inclusive of statistical regression that is utilized to develop models of priority. [FIG.2B and 2D] shows corresponding visual of system flow) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Japa in order to help facilitate performance of the system via generative AI methods (Japa [0019] One or more aspects of the subject disclosure include a device including a processing system including a processor and a memory that stores executable instructions. The executable instructions, when executed by the processing system, facilitate performance of operations that include identifying, without human intervention, a main entity of a natural language question, locating, within a knowledge graph, a focal node corresponding to the main entity, and identifying, within the knowledge graph, a candidate answer set including a group of other entities within a predetermined proximity of the focal node. [0108] It is understood that increasing model size when pretraining natural language representations may improve performance on downstream tasks. However, at some point further model increases may become harder due to graphic processing unit (GPU)/tensor processing unit (TPU) memory limitations and longer training times. The system, devices, techniques and software disclosed herein may be applied in a manner that is well suited to operation on or with processing devices having low resources, e.g., limitations of one or more of processing power, memory capacity, storage capacity, supply power, communication channel capacity, and so on. For example, low-memory variants of the BERT language model for self-supervised learning of language representations, such as distillBERT. [0160] Some of the embodiments described herein can also employ artificial intelligence (AI) to facilitate automating one or more features described herein.
The embodiments (e.g., in connection with automatically identifying acquired cell sites that provide a maximum value/benefit after addition to an existing communication network) can employ various AI-based schemes for carrying out various embodiments...) The combination lacks explicitly and orderly teaching wherein a fuzzification type to perform on the parsed representation of the query field is determined based on a weighting assigned to the query field; However Wong teaches wherein a fuzzification type to perform on the parsed representation of the query field is determined based on a weighting assigned to the query field; (Wong [0025] To improve training of the neural network model 162, the source tickets 164 may include negative samples: samples that might appear to be related, but have been determined to be unrelated. The ticket generator 114 of the computing device 110 may be configured to generate data for training the neural network model 162, for example, by generating negative samples. In some examples, the ticket generator 114 stores the negative samples within the source tickets 164. However, in other examples, the ticket generator 114 dynamically generates the negative samples without storing them within the source tickets 164. This approach may substantially reduce an amount of memory needed to train the neural network model 162 by reducing a number of tickets that are stored in memory. Although the ticket generator 114 is shown as part of the computing device 110, the ticket generator 114 may be incorporated into the computing device 120, into the computing device 160, or other suitable computing devices in other examples. 
In some examples, the ticket generator 114 generates negative samples, such as an unlinked pair of tickets, where each of the pair of tickets is created within a same short-term processing window (e.g., 4-6 hours), is based on established positive weights for link types (e.g., weights that emphasize tickets within a same team, cross team, cross workload, or other commonly linked criteria), and/or based on at least partial matching of title text (e.g., fuzzy matching of at least 20%). [0031] The Siamese neural network model 205 includes a first neural network model 210 (e.g., a first sub-network) and a second neural network model 220 (e.g., a second sub-network) that are identical to each other (e.g., they have a same configuration with same parameters and weights). The first neural network model 210 is arranged as an input layer 212 and an output layer 214 and receives a first ticket (e.g., ticket 202) of a pair that is processed by the Siamese neural network model 205. The second neural network model 220 receives the second ticket (e.g., ticket 204) of the pair. The input layer 212 is configured to process a first text feature of the plurality of text features for a ticket, while the output layer 214 is configured to process an output of the input layer 212 and any remaining text features of the plurality of text features.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Wong in order to use specialized neural network training methods to improve the output of systems (Wong [0025] To improve training of the neural network model 162, the source tickets 164 may include negative samples: samples that might appear to be related, but have been determined to be unrelated.
The ticket generator 114 of the computing device 110 may be configured to generate data for training the neural network model 162, for example, by generating negative samples. In some examples, the ticket generator 114 stores the negative samples within the source tickets 164. However, in other examples, the ticket generator 114 dynamically generates the negative samples without storing them within the source tickets 164. This approach may substantially reduce an amount of memory needed to train the neural network model 162 by reducing a number of tickets that are stored in memory. Although the ticket generator 114 is shown as part of the computing device 110, the ticket generator 114 may be incorporated into the computing device 120, into the computing device 160, or other suitable computing devices in other examples. In some examples, the ticket generator 114 generates negative samples, such as an unlinked pair of tickets, where each of the pair of tickets is created within a same short-term processing window (e.g., 4-6 hours), is based on established positive weights for link types (e.g., weights that emphasize tickets within a same team, cross team, cross workload, or other commonly linked criteria), and/or based on at least partial matching of title text (e.g., fuzzy matching of at least 20%). [0031] The Siamese neural network model 205 includes a first neural network model 210 (e.g., a first sub-network) and a second neural network model 220 (e.g., a second sub-network) that are identical to each other (e.g., they have a same configuration with same parameters and weights). The first neural network model 210 is arranged as an input layer 212 and an output layer 214 and receives a first ticket (e.g., ticket 202) of a pair that is processed by the Siamese neural network model 205. The second neural network model 220 receives the second ticket (e.g., ticket 204) of the pair. 
The input layer 212 is configured to process a first text feature of the plurality of text features for a ticket, while the output layer 214 is configured to process an output of the input layer 212 and any remaining text features of the plurality of text features.) The combination still lacks explicitly and orderly teaching wherein the plurality of indexes includes at least one of a phonemic index or a temporal index; However Mamou teaches wherein the plurality of indexes includes at least one of a phonemic index or a temporal index; (Mamou [0006] An approach for solving the OOV issue consists of converting the speech to phonetic transcripts and representing the query as a sequence of phones. Such transcripts can be generated by expanding the word transcripts into phones using the pronunciation dictionary of the ASR system. This kind of transcript is acceptable to search OOV terms that are phonetically close to in-vocabulary (IV) terms. [0081] Phonetic output is generated using a word-fragment decoder, where word-fragments are defined as variable-length sequences of phones. The decoder generates 1-best word-fragments that are then converted into the corresponding phonetic strings. [0082] Example indices may include a word index on the word confusion network (WCN); a word phone index, a phonetic N-gram index of the phonetic representation of the 1-best word decoding; and a phone index, a phonetic N-gram index of the 1-best fragment decoding. [FIG.5 in conjunction with FIG.7] shows corresponding visual of querying using a plurality of different indexes) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Mamou in order to create a more accurate system output via specialized indexes (Mamou [0079] An ASR system is used for transcribing speech data. It works in speaker-independent mode.
For best recognition results, an acoustic model and a language model are trained in advance on data with similar characteristics. The ASR system generates word lattices. A compact representation of a word lattice called a word confusion network (WCN) is used. Each edge (u, v) is labeled with a word hypothesis and its posterior probability, i.e., the probability of the word given the signal. One of the main advantages of WCN is that it also provides an alignment for all of the words in the lattice. Although WCNs are more compact than word lattices, in general the 1-best path obtained from WCN has a better word accuracy than the 1-best path obtained from the corresponding word lattice. [0096] In order to control the level of fuzziness, the following two parameters are defined: δi, the maximal number of inserted N-grams, and δd, the maximal number of deleted N-grams. Those parameters are used in conjunction with the inverted indices of the phonetic transcript to efficiently find a list of indexed phrases that are different from the query phrase by at most δi insertions and δd deletions of N-grams. Note that a substitution is also allowed by an insertion and a deletion. At the end of this stage, a list of fuzzy matches is obtained and for each match, the list of documents in which it appears.)
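As context for the δi/δd fuzzy-match scheme quoted from Mamou above (an indexed phrase matches a query phrase if they differ by at most δi inserted and at most δd deleted N-grams, with a substitution counted as one insertion plus one deletion), the following is a minimal, order-insensitive sketch in Python. It is an illustration only, not code from Mamou or the present application; the N-gram length n=3, the default thresholds, and the names `ngrams` and `fuzzy_match` are assumptions, and comparing N-gram multisets is a simplification of the inverted-index lookup the reference describes.

```python
from collections import Counter

def ngrams(phones, n=3):
    """Slide a window of length n over a phone sequence to produce N-grams."""
    return [tuple(phones[i:i + n]) for i in range(len(phones) - n + 1)]

def fuzzy_match(query, candidate, n=3, delta_i=1, delta_d=1):
    """Accept candidate if it differs from query by at most delta_i inserted
    and delta_d deleted N-grams (a substitution costs one of each)."""
    q = Counter(ngrams(query, n))
    c = Counter(ngrams(candidate, n))
    inserted = sum((c - q).values())  # N-grams the candidate adds
    deleted = sum((q - c).values())   # N-grams the candidate is missing
    return inserted <= delta_i and deleted <= delta_d
```

Note that substituting a single phone mid-sequence changes up to n overlapping N-grams, so accepting such a variant may require δi and δd as large as n; loosening or tightening these two parameters is exactly the "level of fuzziness" control described in Mamou [0096].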
Regarding claim 15, Rangan teaches A computer program product residing on a computer readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations comprising: (Rangan [FIG.21] shows corresponding system with memory and processor with a computer readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations) processing an input dataset by identifying a record from an input dataset; (Rangan [0014] In various embodiments, a computer-implemented method for evaluating a search process is provided. Information is received identifying in a collection of documents a first set of documents that satisfy search criteria associated with a first search. A document feature vector is then generated for each document in the first set of documents. Information is received identifying in the documents in the collection of documents that do not satisfy the search criteria associated with the first search a second set of documents that satisfy first sampling criteria. [FIG.2&3] shows an overall visual of receiving documents/records) generating a fuzzified representation of an input field in the record; (Rangan [0152] FIG. 12 is a block diagram illustrating a vector-ordered index associated with semantic space 1200 in one embodiment according to the present invention. In this example, all data vectors in semantic space 1200 are broken into some number of discrete blocks. For the purposes of this discussion, a 4-way block split is considered. Assuming 4K bits in the vector, a 4-way split is shown. Processing system 100 may organize the first block to allow for an efficient exact comparison of an initial 1024 bits with fuzzy comparison of the rest of the bits. Processing system 100 may further organize the second block where the second set of 1024 bits are positioned first.
This allows efficient access to those vectors that have an exact match on the segment 1024-2047 bits but have a fuzzy match on 0-1023 and 2048-4096 bits. By storing four different representations of fuzzy vectors, processing system 100 is able to narrow the search space, and still perform a reasonably small number of vector comparisons. [FIG.2] shows an overall visual) generating a vectorized representation of the fuzzified representation of the input field; (Rangan [0151] In further embodiments, given a vector (either term or document vector), processing system 100 may find other vectors and their corresponding objects within a certain cosine distance of the supplied vector. Rather than simply scan an entire vector space linearly, performing a cosine measurement for every enumerated vector, processing system 100 may build vector-ordered storage and indexes to vector-ordered regions. In one embodiment, processing system 100 may split a vector into four equal-width segments and store the vector four times, with ordering based on the segment's order. Processing system 100 then may build four separate in-memory indexes into these segments. [0152] FIG. 12 is a block diagram illustrating a vector-ordered index associated with semantic space 1200 in one embodiment according to the present invention. In this example, all data vectors in semantic space 1200 are broken into some number of discrete blocks. For the purposes of this discussion, a 4-way block split is considered. Assuming 4K bits in the vector, a 4-way split is shown. Processing system 100 may organize the first block to allow for an efficient exact comparison of an initial 1024 bits with fuzzy comparison of the rest of the bits. Processing system 100 may further organize the second block where the second set of 1024 bits are positioned first. This allows efficient access to those vectors that have an exact match on the segment 1024-2047 bits but have a fuzzy match on 0-1023 and 2048-4096 bits.
By storing four different representations of fuzzy vectors, processing system 100 is able to narrow the search space, and still perform a reasonably small number of vector comparisons. [FIG.2] shows an overall visual) and indexing the input field in an unstructured database by processing the vectorized representation of the input field; (Rangan [FIG.1] shows the overall system which indexes the data using the vectorized data [0012] In various embodiments, a semantic space associated with a corpus of electronically stored information (ESI) may be created. Documents (and any other objects in the ESI, in general) may be represented as vectors in the semantic space. Vectors may correspond to identifiers, such as, for example, indexed terms. The semantic space for a corpus of ESI can be used in information filtering, information retrieval, indexing, and relevancy rankings. [0050] Master index 105 can include hardware and/or software elements that provide indexing of information...[0082] As noted earlier, concept searching techniques are most applicable when they can reveal semantic meanings of a corpus without a supervised learning phase. One method includes Singular Value Decomposition (SVD), also known as Latent Semantic Indexing (LSI). LSI is one of the most well-known approaches to semantic evaluation of documents) generating a fuzzified representation of the parsed representation of the query field; (Rangan [0152] FIG. 12 is a block diagram illustrating a vector-ordered index associated with semantic space 1200 in one embodiment according to the present invention. In this example, all data vectors in semantic space 1200 are broken into some number of discrete blocks. For the purposes of this discussion, a 4-way block split is considered. Assuming 4K bits in the vector, a 4-way split is shown. Processing system 100 may organize the first block to allow for an efficient exact comparison of an initial 1024 bits with fuzzy comparison of the rest of the bits.
Processing system 100 may further organize the second block where the second set of 1024 bits are positioned first. This allows efficient access to those vectors that have an exact match on the segment 1024-2047 bits but have a fuzzy match on 0-1023 and 2048-4096 bits. By storing four different representations of fuzzy vectors, processing system 100 is able to narrow the search space, and still perform a reasonably small number of vector comparisons. [FIG.2] shows an overall visual) scoring the input field based upon, at least in part, weighting from a domain model associated with the input field; and providing a weighted result to the query using the scoring of the input field. (Rangan [0068] Portal 202 includes software elements for accessing and presenting information provided by the indexer 204. In this example, the portal 202 includes web applications 212 communicatively coupled to information gathering and presentation resources, such as a Java Server Page (JSP) module 214, a query engine 216, a query optimization module 218, an analytics module 220, and a domain templates module 222. [0078] input text into a semantic model, typically by employing a mathematical analysis technique over a representation called vector space model. This model captures a statistical signature of a document, its terms and their occurrences. A matrix derived from the corpus is then analyzed using a matrix decomposition technique [0080] First are supervised learning systems. In the supervised learning model, an entirely different approach is taken. A main requirement in this model is supplying a previously established collection of documents that constitutes a training set. The training set contains several examples of documents belonging to specific concepts. The learning algorithm analyzes these documents and builds a model [0165] boosts or other weighting or ranking influences.
In a further embodiment, the closest terms identified in step 1425 may be presented as a "preview" for a user to select from. Processing system 100 then may alter generation of the query. Method 1400 continues via step "A" in FIG. 14B. [FIG.1] shows overall visual of the system) Rangan does not explicitly teach processing a query for obtaining data from an unstructured database; generating a parsed representation of a query field by parsing the field in the query; generating a vectorized representation of the fuzzified representation of the query field; identifying the input field from the unstructured database by querying the unstructured database for the vectorized representation of the query field against a plurality of indexes using a vector search mechanism; However, Japa teaches processing a query for obtaining data from an unstructured database; (Japa [0034] In various embodiments, the content sources 175 include broadcast television and radio sources, video on demand platforms and streaming video and audio services platforms, one or more content data networks, data servers, web servers and other content servers, and/or other sources of media. [0037] The example system 100 further includes a question-answer system adapted to determine answers to natural language questions from information maintained by the knowledge base management system 180. For example, the system 100 may include a question answer (QA) server 183 hosting a back-end question-answer service. The QA server 183 receives a query via the communication network 125, processes the query and generates an answer according to information maintained by the knowledge base management system. In some embodiments, the QA server 183 is collocated [0040] The QA server 183 may process the query to determine an answer and forward the answer to the user via one or more of a voice response via the telephone device 134 and/or via some other mode, such as an email or text message.
The answer may be converted from text to voice at the QA server 183, within the communications network 125 for delivery to the user via the voice access 130. [FIG.1 & 3] shows a visual of the overall query system obtaining data from an unstructured database) generating a parsed representation of a query field by parsing the field in the query; (Japa [0004] knowledge-based QA systems may accept natural language as a query, offering a more user-friendly solution. There are two primary approaches for the task of QA: (i) semantic parsing based systems (SP-based), and (ii) information retrieval-based systems (IR-based). The SP-based approaches address the QA problem by constructing a semantic parser that converts a natural language question into conditionally structured expressions, like logical forms, and then run the query on the knowledge base to obtain the answer. The SP-based approaches generally convert candidate entity-predicate pairs into a query statement and query the knowledge base to obtain an answer. Example SP-based systems may include three modules: (i) an entity linking module, adapted to recognize all entity mentions in a question and link each mention to an entity in the knowledge base; (ii) a predicate mapping module adapted to find candidate predicates for the question within the knowledge base; and (iii) an answer selection module. [0093] A pre-trained BERT embeddings base uncased version was used for knowledge base-QA training. During tokenization, BERT code uses a word-piece algorithm to split words into sub-words and all less frequent words will be split into two or more sub-words. The vocabulary size of BERT was 30522. A delexicalization strategy was adopted. For each question, the candidate entity mentions belonging to date, ordinal, or number are replaced with their type. The same is applied to the answer context from knowledge base text if the overlap belongs to the above type.
This assures that the query matches up with answer context in the embedding space. [FIG.2D] shows corresponding visual of system flow) generating a vectorized representation of the fuzzified representation of the query field; identifying the input field from the unstructured database by querying the unstructured database for the vectorized representation of the query field against a plurality of indexes using a vector search mechanism; (Japa [0018] The natural language question and the contextual information of the group of other entities of the candidate answer set are separately encoded to obtain an encoded vectorial representation of the natural language question and a plurality of encoded vectorial representations of the candidate answer set. [0076] These three equations correspond to a formation process of the self multi-head attention mechanism. The matrix of W is a weight matrix. The Q, V, K represent query, value, key vectors, that each multiplies its corresponding weight matrix before getting into the attention function. Repeat this process h times, according to the number of heads, h. Each of the results may be connected to obtain a new vector matrix that reflects a relationship between the query and value vectors Q and V. In particular, the self multi-head attention mechanism is adapted to expose internal connections within words, with Q=V=K=X, with X representing the word vector matrix. [0077] The multi-head attention mechanism helps the model learn the words relevant information in different presentation sub-spaces. The self-attention mechanism can extract the dependence in words. As the name shows, the self multi-head attention mechanism integrates the benefits of both and creates a context vector for each word.
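The Q/K/V computation quoted from Japa's paragraph [0076] can be illustrated numerically. The following is a minimal numpy sketch, with shapes, the two-head split, and the identity weight matrices chosen purely for illustration; it is not Japa's implementation.

```python
# Minimal sketch of self multi-head attention with Q = K = V = X:
# X is the word-vector matrix; each head multiplies X by its slice of
# the weight matrices before the attention function, and the per-head
# results are concatenated into one context vector per word.
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

def self_multi_head(X, Wq, Wk, Wv, heads=2):
    """Self-attention (Q = K = V = X before projection), repeated once
    per head; per-head outputs are concatenated."""
    outs = []
    d = X.shape[-1] // heads
    for h in range(heads):
        sl = slice(h * d, (h + 1) * d)
        outs.append(attention(X @ Wq[:, sl], X @ Wk[:, sl], X @ Wv[:, sl]))
    return np.concatenate(outs, axis=-1)
```

With sharply peaked scores the attention weights select a single value row, which is the sense in which the mechanism "reflects a relationship between the query and value vectors."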
[0160] artificial intelligence (AI) ...approaches comprise, e.g., naïve Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models providing different patterns of independence can be employed. Classification as used herein also is inclusive of statistical regression that is utilized to develop models of priority. [FIG.2B and 2D] shows corresponding visual of system flow) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Japa in order to help facilitate performance of the system via generative AI methods (Japa [0019] One or more aspects of the subject disclosure include a device including a processing system including a processor and a memory that stores executable instructions. The executable instructions, when executed by the processing system, facilitate performance of operations that include identifying, without human intervention, a main entity of a natural language question, locating, within a knowledge graph, a focal node corresponding to the main entity, and identifying, within the knowledge graph, a candidate answer set including a group of other entities within a predetermined proximity of the focal node. [0108] It is understood that increasing model size when pretraining natural language representations may improve performance on downstream tasks. However, at some point further model increases may become harder due to graphic processing unit (GPU)/tensor processing unit (TPU) memory limitations and longer training times. The system, devices, techniques and software disclosed herein may be applied in a manner that is well suited to operation on or with processing devices having low resources, e.g., limitations of one or more of processing power, memory capacity, storage capacity, supply power, communication channel capacity, and so on.
For example, low-memory variants of the BERT language model for self-supervised learning of language representations, such as DistilBERT. [0160] Some of the embodiments described herein can also employ artificial intelligence (AI) to facilitate automating one or more features described herein. The embodiments (e.g., in connection with automatically identifying acquired cell sites that provide a maximum value/benefit after addition to an existing communication network) can employ various AI-based schemes for carrying out various embodiments...) The combination does not explicitly teach wherein a fuzzification type to perform on the parsed representation of the query field is determined based on a weighting assigned to the query field; However, Wong teaches wherein a fuzzification type to perform on the parsed representation of the query field is determined based on a weighting assigned to the query field; (Wong [0025] To improve training of the neural network model 162, the source tickets 164 may include negative samples: samples that might appear to be related, but have been determined to be unrelated. The ticket generator 114 of the computing device 110 may be configured to generate data for training the neural network model 162, for example, by generating negative samples. In some examples, the ticket generator 114 stores the negative samples within the source tickets 164. However, in other examples, the ticket generator 114 dynamically generates the negative samples without storing them within the source tickets 164. This approach may substantially reduce an amount of memory needed to train the neural network model 162 by reducing a number of tickets that are stored in memory. Although the ticket generator 114 is shown as part of the computing device 110, the ticket generator 114 may be incorporated into the computing device 120, into the computing device 160, or other suitable computing devices in other examples.
In some examples, the ticket generator 114 generates negative samples, such as an unlinked pair of tickets, where each of the pair of tickets is created within a same short-term processing window (e.g., 4-6 hours), is based on established positive weights for link types (e.g., weights that emphasize tickets within a same team, cross team, cross workload, or other commonly linked criteria), and/or based on at least partial matching of title text (e.g., fuzzy matching of at least 20%). [0031] The Siamese neural network model 205 includes a first neural network model 210 (e.g., a first sub-network) and a second neural network model 220 (e.g., a second sub-network) that are identical to each other (e.g., they have a same configuration with same parameters and weights). The first neural network model 210 is arranged as an input layer 212 and an output layer 214 and receives a first ticket (e.g., ticket 202) of a pair that is processed by the Siamese neural network model 205. The second neural network model 220 receives the second ticket (e.g., ticket 204) of the pair. The input layer 212 is configured to process a first text feature of the plurality of text features for a ticket, while the output layer 214 is configured to process an output of the input layer 212 and any remaining text features of the plurality of text features.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Wong in order to use specialized neural network training methods to improve system output (Wong [0025] To improve training of the neural network model 162, the source tickets 164 may include negative samples: samples that might appear to be related, but have been determined to be unrelated.
The ticket generator 114 of the computing device 110 may be configured to generate data for training the neural network model 162, for example, by generating negative samples. In some examples, the ticket generator 114 stores the negative samples within the source tickets 164. However, in other examples, the ticket generator 114 dynamically generates the negative samples without storing them within the source tickets 164. This approach may substantially reduce an amount of memory needed to train the neural network model 162 by reducing a number of tickets that are stored in memory. Although the ticket generator 114 is shown as part of the computing device 110, the ticket generator 114 may be incorporated into the computing device 120, into the computing device 160, or other suitable computing devices in other examples. In some examples, the ticket generator 114 generates negative samples, such as an unlinked pair of tickets, where each of the pair of tickets is created within a same short-term processing window (e.g., 4-6 hours), is based on established positive weights for link types (e.g., weights that emphasize tickets within a same team, cross team, cross workload, or other commonly linked criteria), and/or based on at least partial matching of title text (e.g., fuzzy matching of at least 20%). [0031] The Siamese neural network model 205 includes a first neural network model 210 (e.g., a first sub-network) and a second neural network model 220 (e.g., a second sub-network) that are identical to each other (e.g., they have a same configuration with same parameters and weights). The first neural network model 210 is arranged as an input layer 212 and an output layer 214 and receives a first ticket (e.g., ticket 202) of a pair that is processed by the Siamese neural network model 205. The second neural network model 220 receives the second ticket (e.g., ticket 204) of the pair. 
The input layer 212 is configured to process a first text feature of the plurality of text features for a ticket, while the output layer 214 is configured to process an output of the input layer 212 and any remaining text features of the plurality of text features.) The combination still does not explicitly teach wherein the plurality of indexes includes at least one of a phonemic index or a temporal index; However, Mamou teaches wherein the plurality of indexes includes at least one of a phonemic index or a temporal index; (Mamou [0006] An approach for solving the OOV issue consists of converting the speech to phonetic transcripts and representing the query as a sequence of phones. Such transcripts can be generated by expanding the word transcripts into phones using the pronunciation dictionary of the ASR system. This kind of transcript is acceptable to search OOV terms that are phonetically close to in-vocabulary (IV) terms. [0081] Phonetic output is generated using a word-fragment decoder, where word-fragments are defined as variable-length sequences of phones. The decoder generates 1-best word-fragments that are then converted into the corresponding phonetic strings. [0082] Example indices may include a word index on the word confusion network (WCN); a word phone index, which is a phonetic N-gram index of the phonetic representation of the 1-best word decoding; and a phone index, a phonetic N-gram index of the 1-best fragment decoding. [FIG.5 in conjunction with FIG.7] shows corresponding visual of querying using a plurality of different indexes) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Mamou in order to create a more accurate system output via specialized indexes (Mamou [0079] An ASR system is used for transcribing speech data. It works in speaker-independent mode.
For best recognition results, an acoustic model and a language model are trained in advance on data with similar characteristics. The ASR system generates word lattices. A compact representation of a word lattice called a word confusion network (WCN) is used. Each edge (u, v) is labeled with a word hypothesis and its posterior probability, i.e., the probability of the word given the signal. One of the main advantages of WCN is that it also provides an alignment for all of the words in the lattice. Although WCNs are more compact than word lattices, in general the 1-best path obtained from WCN has a better word accuracy than the 1-best path obtained from the corresponding word lattice. [0096] In order to control the level of fuzziness, the following two parameters are defined: δi, the maximal number of inserted N-grams, and δd, the maximal number of deleted N-grams. Those parameters are used in conjunction with the inverted indices of the phonetic transcript to efficiently find a list of indexed phrases that are different from the query phrase by at most δi insertions and δd deletions of N-grams. Note that a substitution is also allowed by an insertion and a deletion. At the end of this stage, a list of fuzzy matches is obtained and, for each match, the list of documents in which it appears.)
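Mamou's insertion/deletion bound over phone N-grams can be sketched as a simple filter. This is a hedged illustration, assuming phrases are already phone sequences; the multiset comparison below stands in for Mamou's inverted-index retrieval, which achieves the same test more efficiently.

```python
# Sketch of the fuzzy criterion Mamou [0096] describes: an indexed
# phrase matches the query if it differs by at most delta_i inserted
# and delta_d deleted phone N-grams (a substitution costs one
# insertion plus one deletion).
from collections import Counter

def ngrams(phones, n=2):
    """Phone N-grams of a phonetic transcript."""
    return [tuple(phones[i:i + n]) for i in range(len(phones) - n + 1)]

def fuzzy_match(query_phones, indexed_phones, delta_i=1, delta_d=1, n=2):
    """True if within delta_i insertions and delta_d deletions of N-grams."""
    q = Counter(ngrams(query_phones, n))
    d = Counter(ngrams(indexed_phones, n))
    inserted = sum((d - q).values())  # N-grams only in the indexed phrase
    deleted = sum((q - d).values())   # N-grams only in the query phrase
    return inserted <= delta_i and deleted <= delta_d
```

Raising δi and δd widens the net, which is how the level of fuzziness is controlled: a phonetically close out-of-vocabulary query can still retrieve its in-vocabulary neighbors.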
The combination does not explicitly teach performing a fuzzification type comprising at least one of generating phonemically similar representations associated with the input field based on a phonemic similarity metric; or generating temporally similar representations associated with the input field based on a temporal formatting of the input field; However, Gruber teaches performing a fuzzification type comprising at least one of generating phonemically similar representations associated with the input field based on a phonemic similarity metric; or generating temporally similar representations associated with the input field based on a temporal formatting of the input field; (Gruber [0008] Some implementations described herein generate phonetic representations for both speech recognition and synthesis based on a single spoken input. By using only a single spoken input to train speech recognition and speech synthesis processes, the number of interactions necessary to train the digital assistant can be reduced, making the digital assistant appear smarter and more human. Moreover, accepting a spoken input instead of requiring the user to type or otherwise select a textual phonetic representation in a phonetic alphabet allows a more human-like interaction with the digital assistant, thus enhancing the user experience and potentially increasing the user's confidence in the capabilities of the digital assistant. [0009] Using a single speech input also offers several benefits over techniques that require a user to type in or otherwise select textual phonetic representations of a word.
For example, users may be unfamiliar with the particular phonetic alphabet used to train the digital assistant. [0124] the speech-to-text processor determines the first phonetic representation by processing the speech input using an acoustic model to determine the phonemes in the utterance [0130] Rather than requiring the user to manually identify the text string, the digital assistant may identify the text string automatically. In some implementations, the digital assistant determines the text string using the first phonetic representation (505). This may be accomplished by determining that the utterance corresponds to a certain sequence of letters, even if the digital assistant does not recognize that sequence of letters as a word. For example, a speech recognizer can determine that the phonemes "tuh-may-doe" correspond to the letters "t o m a t o," even if that word is not in the speech recognizer's vocabulary. In some implementations, the digital assistant uses fuzzy matching and/or approximate matching techniques to determine the text string from the first phonetic representation. For example, if a user provides a speech input to a digital assistant asking to call "f-ill-ee-p-ay," but this particular phonetic sequence has not been associated with the name "Philippe," the digital assistant uses fuzzy matching [0134-0137] elaborate on the matter. [FIG.3B] shows corresponding visual) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Gruber in order to enhance the user experience via specialized phonemic representations (Gruber [0008] Some implementations described herein generate phonetic representations for both speech recognition and synthesis based on a single spoken input.
By using only a single spoken input to train speech recognition and speech synthesis processes, the number of interactions necessary to train the digital assistant can be reduced, making the digital assistant appear smarter and more human. Moreover, accepting a spoken input instead of requiring the user to type or otherwise select a textual phonetic representation in a phonetic alphabet allows a more human-like interaction with the digital assistant, thus enhancing the user experience and potentially increasing the user's confidence in the capabilities of the digital assistant. [0116] a phonetic representation of the name 402 in a speech recognition alphabet (phonetic representation 404), as well as a phonetic representation of the name 402 in a speech synthesis alphabet (phonetic representation 406). Both the representation 404 in the recognition alphabet and the representation 406 in the synthesis alphabet are based on the same pronunciation, and, therefore, the user's preferred pronunciation will both be accurately recognized by the STT processing module 330 and accurately synthesized by the speech synthesis module 265. [FIG.3B] shows corresponding visual) Regarding claim 16, Japa, Rangan, Wong, Gruber and Mamou teach The computer program product of claim 15, wherein the plurality of indexes further includes a verbatim index; (Rangan [0049] FIG. 1 is a block diagram of an electronic document processing system 100 in one embodiment according to the present invention. In this example, processing system 100 includes master index 105, messaging applications programming interface (MAPI) module 110, e-mail servers 115, duplicate eliminator 120, buffer manager 125, indexer 130, thread analyzer 135, topic classifier 140, analytics extraction, transformation, and loading (ETL) module 145, directory interface 150, and directory servers 155.
Master index 105 includes e-mail tables 160, e-mail full text index 165, topic tables 170, cluster full text index 175, distribution list full text index 180, dimension tables 185, participant tables 190, and fact tables 195. E-mail servers 115 include one or more mail servers (e.g., mail server 117). Directory servers 155 include one or more directory servers (e.g., directory server 157). [0050] Master index 105 can include hardware and/or software elements that provide indexing of information associated with electronic documents, such as word processing files, presentation files, databases, e-mail message and attachments, instant messaging (IM) messages, Short Message Service (SMS) messages, Multimedia Message Service (MMS), or the like. Master index 105 may be embodied as one or more flat files, databases, data marts, data warehouses, and other repositories of data. Although the disclosure references specific examples using e-mail messages, the disclosure should not be considered as limited to only e-mail message or electronic messages only. The disclosure is applicable to other types of electronic documents as discussed above [FIG.1] shows overall visual of the index and the querying) Regarding claim 17, Japa, Rangan, Wong, Gruber and Mamou teach The computer program product of claim 15, wherein processing the input dataset includes defining a domain model for the input field with a default weighting. (Japa [0018] The encoded vectorial representation of the natural language questions is evaluated under an influence of a plurality of aspects of the contextual information to obtain a plurality of score values. A member of the candidate answer set is selected according to the plurality of score values to obtain a selected one of the candidate answer set as an answer to the natural language question. [0027] neural network, such as a bidirectional encoder representations from transformers (BERT) model. 
[0042] The QA processing module 202 uses a scoring process to score processed results of the encoded question under the influence of each answer of the candidate answer set. The QA processing module 202 evaluates the scores, e.g., according to a ranking and/or according to a score threshold to distinguish one or more answers from the candidate answer set. For example, the QA processing module 202 may determine independent scores [0076] self multi-head attention mechanism. The matrix of W is a weight matrix. The Q, V, K represent query, value, key vectors, that each multiplies its corresponding weight matrix before getting into the attention function. Repeat this process h times, according to the number of heads, h. Each of the results may be connected to obtain a new vector matrix that reflects a relationship between the query and value vectors Q and V. In particular, the self multi-head attention mechanism [FIG.2B and 2D] shows corresponding visual of system flow) Regarding claim 18, Japa, Rangan, Wong, Gruber and Mamou teach The computer program product of claim 17, wherein scoring the matching input field includes processing a weighting provided in the query. (Japa [0018] The encoded vectorial representation of the natural language questions is evaluated under an influence of a plurality of aspects of the contextual information to obtain a plurality of score values. A member of the candidate answer set is selected according to the plurality of score values to obtain a selected one of the candidate answer set as an answer to the natural language question. [0027] neural network, such as a bidirectional encoder representations from transformers (BERT) model. [0042] The QA processing module 202 uses a scoring process to score processed results of the encoded question under the influence of each answer of the candidate answer set. 
The QA processing module 202 evaluates the scores, e.g., according to a ranking and/or according to a score threshold to distinguish one or more answers from the candidate answer set. For example, the QA processing module 202 may determine independent scores [0076] self multi-head attention mechanism. The matrix of W is a weight matrix. The Q, V, K represent query, value, key vectors, that each multiplies its corresponding weight matrix before getting into the attention function. Repeat this process h times, according to the number of heads, h. Each of the results may be connected to obtain a new vector matrix that reflects a relationship between the query and value vectors Q and V. In particular, the self multi-head attention mechanism [FIG.2B and 2D] shows corresponding visual of system flow) Regarding claim 19, Japa, Rangan, Wong, Gruber and Mamou teach The computer program product of claim 18, wherein processing the weighting provided in the query includes replacing the default weighting in the domain model for the input field with the weighting provided in the query. (Japa [0018] The encoded vectorial representation of the natural language questions is evaluated under an influence of a plurality of aspects of the contextual information to obtain a plurality of score values. A member of the candidate answer set is selected according to the plurality of score values to obtain a selected one of the candidate answer set as an answer to the natural language question. [0027] neural network, such as a bidirectional encoder representations from transformers (BERT) model. [0042] The QA processing module 202 uses a scoring process to score processed results of the encoded question under the influence of each answer of the candidate answer set. The QA processing module 202 evaluates the scores, e.g., according to a ranking and/or according to a score threshold to distinguish one or more answers from the candidate answer set. 
For example, the QA processing module 202 may determine independent scores [0076] self multi-head attention mechanism. The matrix of W is a weight matrix. The Q, V, K represent query, value, key vectors, that each multiplies its corresponding weight matrix before getting into the attention function. Repeat this process h times, according to the number of heads, h. Each of the results may be connected to obtain a new vector matrix that reflects a relationship between the query and value vectors Q and V. In particular, the self multi-head attention mechanism [FIG.2B and 2D] shows corresponding visual of system flow) Regarding claim 20, Japa, Rangan, Wong, Gruber and Mamou teach The computer program product of claim 15, wherein providing the weighted result to the query using the scoring of the matching input field includes: comparing the scoring of the matching input field to a threshold associated with the matching input field; (Japa [0027] The system 100 may be further adapted to determine an answer to the question, e.g., by comparing results of responses of the cross-attention neural networks to the question under the influence of the candidate answer set aspects. For example, the system 100 may calculate a respective similarity score between the question and each corresponding candidate answer set and select a final answer or answers according to the scores [0053] more than one candidate answer sets may be obtained, e.g., according to different proximity values, with each of the candidate answer sets independently processed according to the techniques disclosed herein. A post-processing layer may be applied to separate results, e.g., to compare the separately obtained answer results, e.g., to identify a confidence measure. For example, a greater confidence value may be determined for situations in which the independently obtained results agree. [0058] the KB-QA system 210 includes a ranking module.
The ranking module may be adapted to compare cross-attention results determined for the candidate answers. For example, the ranking module may compare similarity scores determined for each answer to a threshold. If the similarity score is above a score threshold, the candidate answer may be selected as an answer to the question. It is envisioned that in at least some embodiments, more than one answer may be selected based on a threshold comparison. Alternatively or in addition, the ranking module may perform a ranking of the cross-attention results based upon their corresponding similarity score values. For example, an answer and/or answers to the question may be determined according to the ranking. In some embodiments, selection of the answer may be determined according to a comparison of scores to a threshold and a ranking.) and providing the weighted result to the query in response to the scoring of the matching input field exceeding the threshold associated with the matching input field. (Japa [0052] In at least some embodiments, a rule-based proximity value may be adapted and/or otherwise modified according to a candidate answer set. For example, a number of candidate answers returned from a first proximity value, e.g., 2-hop, may be increased, e.g., to 3-hop, if a number of candidate answers returned according to the first proximity value fails to satisfy a candidate answer set threshold value. It is understood that the proximity value may be increased and/or decreased according to the threshold value. [0058] In at least some embodiments, the KB-QA system 210 includes a ranking module. The ranking module may be adapted to compare cross-attention results determined for the candidate answers. For example, the ranking module may compare similarity scores determined for each answer to a threshold. If the similarity score is above a score threshold, the candidate answer may be selected as an answer to the question.
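The threshold-then-rank selection attributed to Japa's ranking module can be sketched in a few lines. Function and variable names here are hypothetical; only the logic (compare similarity scores to a threshold, rank the survivors, select one or more answers) comes from the quoted passages.

```python
# Sketch of threshold-plus-ranking answer selection: keep candidates
# whose similarity score exceeds the threshold, rank them by score,
# and return up to top_k answers.

def select_answers(scored_candidates, threshold, top_k=1):
    """scored_candidates: list of (answer, similarity_score) pairs."""
    above = [(a, s) for a, s in scored_candidates if s > threshold]
    above.sort(key=lambda pair: pair[1], reverse=True)
    return [a for a, _ in above[:top_k]]
```

Using both a threshold and a ranking lets the system return nothing when no candidate is credible, rather than always returning the best of a bad set.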
It is envisioned that in at least some embodiments, more than one answer may be selected based on a threshold comparison. Alternatively or in addition, the ranking module may perform a ranking of the cross-attention results based upon their corresponding similarity score values. For example, an answer and/or answers to the question may be determined according to the ranking. In some embodiments, selection of the answer may be determined according to a comparison of scores to a threshold and a ranking. [0095] Results were obtained by comparing the performance of the example technique with other information-retrieval (IR) based approaches. The results are presented in Table 1. Based on the tabulated results, the example approach (LMKB-QA) obtained an F1 score of 50.9 on Web-Questions using the topic entity predicted by Freebase API. According to Table 1, the proposed technique achieves better results or even competes with state-of-the-art. This demonstrates the effectiveness of using a BERT pre-trained language model.) Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARYAN D TOUGHIRY whose telephone number is (571)272-5212. The examiner can normally be reached Monday - Friday, 9 am - 5 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aleksandr Kerzhner, can be reached at (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ARYAN D TOUGHIRY/Examiner, Art Unit 2165 /ALEKSANDR KERZHNER/Supervisory Patent Examiner, Art Unit 2165
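The cited Japa passages describe two mechanisms the examiner is mapping to the claims: widening the retrieval neighborhood (e.g., from 2-hop to 3-hop) when the candidate set is too small, and selecting answers by comparing similarity scores to a threshold and ranking the survivors. A minimal sketch of that logic, with all function and variable names hypothetical and a toy knowledge base standing in for the reference's actual retriever:

```python
from typing import Callable, List

def retrieve_candidates(question: str, hop: int) -> List[str]:
    """Hypothetical retriever: candidate answers within `hop` edges
    of the topic entity in a toy knowledge base."""
    toy_kb = {2: ["answer_a", "answer_b"],
              3: ["answer_a", "answer_b", "answer_c"]}
    return toy_kb.get(hop, [])

def select_answers(
    question: str,
    score: Callable[[str, str], float],
    score_threshold: float = 0.5,
    min_candidates: int = 2,
    start_hop: int = 2,
    max_hop: int = 3,
) -> List[str]:
    # Japa [0052]: widen the proximity value (hop count) while the
    # candidate set fails to satisfy the candidate-set threshold.
    hop = start_hop
    candidates = retrieve_candidates(question, hop)
    while len(candidates) < min_candidates and hop < max_hop:
        hop += 1
        candidates = retrieve_candidates(question, hop)

    # Japa [0058]: score each candidate, keep only those above the
    # score threshold, and rank the survivors by score.
    scored = [(c, score(question, c)) for c in candidates]
    above = [(c, s) for c, s in scored if s > score_threshold]
    above.sort(key=lambda cs: cs[1], reverse=True)
    return [c for c, _ in above]

if __name__ == "__main__":
    toy_scores = {"answer_a": 0.9, "answer_b": 0.4, "answer_c": 0.7}
    picked = select_answers("who founded X?",
                            score=lambda q, c: toy_scores[c],
                            min_candidates=3)
    print(picked)  # -> ['answer_a', 'answer_c']
```

With `min_candidates=3`, the 2-hop set is too small, so retrieval widens to 3-hop before scoring; only the candidates scoring above 0.5 survive, best first. This is a sketch of the cited behavior, not the reference's implementation.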

Prosecution Timeline

Apr 16, 2024 - Application Filed
Mar 10, 2025 - Non-Final Rejection (§103)
Jul 17, 2025 - Response Filed
Oct 01, 2025 - Final Rejection (§103)
Dec 30, 2025 - Interview Requested
Jan 06, 2026 - Applicant Interview (Telephonic)
Jan 06, 2026 - Examiner Interview Summary
Jan 07, 2026 - Request for Continued Examination
Jan 23, 2026 - Response after Non-Final Action
Feb 10, 2026 - Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology:

- Patent 12602374: DATA ACQUISITION METHOD AND APPARATUS, COMPUTER DEVICE AND STORAGE MEDIUM (2y 5m to grant; granted Apr 14, 2026)
- Patent 12596596: USER-SPACE PARALLEL ACCESS CHANNEL FOR TRADITIONAL FILESYSTEM USING CAPI TECHNOLOGY (2y 5m to grant; granted Apr 07, 2026)
- Patent 12579141: GENERATING QUERY ANSWERS FROM A USER'S HISTORY (2y 5m to grant; granted Mar 17, 2026)
- Patent 12572390: SYSTEMS AND METHODS FOR ADAPTIVE WEIGHTING OF MACHINE LEARNING MODELS (2y 5m to grant; granted Mar 10, 2026)
- Patent 12573292: VEHICLE IDENTIFICATION USING ADVANCED DRIVER ASSISTANCE SYSTEMS (ADAS) (2y 5m to grant; granted Mar 10, 2026)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 88% (+19.9%)
Median Time to Grant: 3y 1m
PTA Risk: High

Based on 189 resolved cases by this examiner. Grant probability derived from career allow rate.
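The headline figures follow from the career data shown earlier on the page: 128 grants out of 189 resolved cases, and a +19.9-point interview lift. Assuming the page simply rounds these derived values (an assumption about its methodology, not a documented formula), the arithmetic checks out:

```python
# Career allow rate: 128 granted out of 189 resolved cases.
granted, resolved = 128, 189
allow_rate = 100 * granted / resolved   # about 67.7

# Displayed grant probability is the rounded allow rate.
print(round(allow_rate))                # -> 68

# With-interview projection: base rate plus the +19.9-point lift.
interview_lift = 19.9
print(round(allow_rate + interview_lift))  # -> 88
```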
