Prosecution Insights
Last updated: April 19, 2026
Application No. 18/808,891

SYSTEMS AND METHODS FOR AUTOMATED INFORMATION RETRIEVAL

Final Rejection — §103, §DP
Filed: Aug 19, 2024
Examiner: TOUGHIRY, ARYAN D
Art Unit: 2165
Tech Center: 2100 — Computer Architecture & Software
Assignee: AT&T Intellectual Property I, L.P.
OA Round: 2 (Final)
Grant Probability: 68% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 68% (128 granted / 189 resolved), +12.7% vs TC avg — above average
Interview Lift: +19.9% for resolved cases with interview
Avg Prosecution: 3y 1m (typical timeline)
Total Applications: 206 across all art units (17 currently pending)

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 64.4% (+24.4% vs TC avg)
§102: 14.9% (-25.1% vs TC avg)
§112: 7.0% (-33.0% vs TC avg)
Comparison baseline is the Tech Center average estimate; based on career data from 189 resolved cases.

Office Action

§103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 10/31/2025 have been fully considered.

Double Patenting: These issues have been resolved, and the rejection has been withdrawn in light of the terminal disclaimer filed 10/31/2025.

35 USC § 101: These issues have been resolved, and the rejection has been withdrawn in light of the amendments and arguments.

35 USC § 102 & 35 USC § 103: In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Primary prior art Zeiler teaches "a combination term vector for a term comprising a group of words, the generating comprising combining a group of term vectors based on a combination rule" (see paragraphs 30 and 47 and figure 4). Secondary prior art He teaches "using word vectors representing the words of the group of words of the term" (see paragraphs 16, 64, and 122). Under the broadest reasonable interpretation (BRI), the newly amended phrase "to improve an accuracy of a search result" is functional language that describes a purpose, and for purposes of examination it carries little weight: the recited action to which it is connected, "searching a data store using the combination term vector via a machine learning model," is enough to teach the limitation, because the improvement of the result is defined by that preceding action, not by the motivation for performing it. As in the prior rejection, Zeiler teaches "searching a data store using the combination term vector via a machine learning model" (see Zeiler paragraphs 17, 18, 30, figure 4).
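As the action reads Zeiler [0030] and [0047], query-term vectors connected by a logical AND are combined by vector summation, with negation available for exclusionary operators. A minimal sketch of that combination step, using toy three-dimensional vectors and illustrative function and operator names (none of this is Zeiler's actual code):

```python
def vec_sum(a, b):
    """Element-wise sum of two equal-length vectors."""
    return [x + y for x, y in zip(a, b)]

def vec_neg(a):
    """Element-wise negation, e.g. for an exclusionary operator."""
    return [-x for x in a]

def combine(term_vectors, operators):
    """Fold a list of term vectors into one resulting vector.

    operators[i] connects term_vectors[i + 1] to the running result;
    the operator names here are illustrative, not from Zeiler.
    """
    result = term_vectors[0]
    for op, vec in zip(operators, term_vectors[1:]):
        if op == "AND":
            result = vec_sum(result, vec)           # summation for AND
        elif op == "AND NOT":
            result = vec_sum(result, vec_neg(vec))  # negate, then sum
        else:
            raise ValueError(f"unsupported operator: {op}")
    return result

# "cats AND dogs": one resulting vector to search the vector space with
cats, dogs = [0.2, 0.5, 0.1], [0.3, 0.4, 0.2]
print(combine([cats, dogs], ["AND"]))
```

In Zeiler's scheme as cited, the resulting vector would then be used to predict locations in the vector space where matching results are mapped (paragraph [0030], FIG. 4 steps 408 and 410).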
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-6, 8-10, and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over US 20180089556 A1, Zeiler, Matthew et al. (hereinafter Zeiler) in view of US 20170060844 A1, He, Xiaodong et al. (hereinafter He).

Regarding claim 1, Zeiler teaches a method comprising: generating, by a processor, a combination term vector for a term comprising a group of words, the generating comprising combining a group of term vectors based on a combination rule (Zeiler [0030]: perform vector summation or negation on vectors based on one or more logical operators of the request to generate a resulting vector; as an example, the request may include query terms, and the query terms may be connected by one or more logical operators. [0047]: a vector summation may be performed on two vectors based on each of the two vectors being respectively generated from two query terms connected by a logical AND operator. [0018-0020] teach the machine learning/neural network methods (corresponding parameters/rules) which are applied to a plurality of vectors for generation and combination; [FIG. 4] steps 401, 404, and 406 show this visually.);

wherein respective term vectors of the group of term vectors are generated based on different term vector rules (Zeiler [0023]: models to generate one or more vectors that may be used to map data to a vector space. In some embodiments, model subsystem 114 may provide a content item (or a portion thereof) to a prediction model to cause the prediction model to generate a vector. Vector space subsystem 112 may map the content item (or the portion thereof) to the vector space based on the generated vector. As an example, the content item may include one or more data types. In some embodiments, with respect to a content item including multiple data types... [0018-0020] teach the machine learning/neural network methods (corresponding parameters/rules) which are applied to a plurality of vectors for generation and combination; [0023-0024] teach generation of the word/term vectors based on the corresponding parameters/rules.);

querying, by the processor, a data store using the combination term vector via a machine learning model to improve an accuracy of a search result; and receiving, by the processor from the data store, the search result based on the combination term vector (Zeiler [0017]: in some embodiments, system 100 may facilitate prediction-model-based mapping and/or search using a multi-data-type vector space. The prediction models may include neural networks, other machine learning models, or other prediction models. As an example, neural networks may be based on a large collection of neural units (or artificial neurons). Neural networks may loosely mimic the manner in which a biological brain works (e.g., via large clusters of biological neurons connected by axons). Each neural unit of a neural network may be connected with many other neural units of the neural network... [0018]: in some embodiments, the search and indexing of multiple different data types may be unified into one search space. In some embodiments, machine learning models (such as neural networks) may be trained to map that data into vectors. The training process may attempt to map through multiple neural networks or other machine learning models... [0030]: based on the resulting vector, vector space subsystem 112 may predict one or more locations within a vector space (where requested results are mapped). [FIG. 4] steps 408 and 410 show this visually; [0020-0022] elaborate on the system querying a data store using the vector and receiving the result based on the vector.).

Zeiler lacks explicitly and orderly teaching using word vectors representing the words of the group of words of the term. He teaches using word vectors representing the words of the group of words of the term (He [0016]: the computing system can sequentially process each word in the input and, using a neural network, map the words one by one into a hidden semantic space, replete with historical information. After each word is mapped, the computing system can determine a semantic representation for the input based on the value and dimensionality of the resulting natural embedding vector. [0064]: the mapping module, such as the mapping module depicted in block 222, can map each word of the word sequence 302 sequentially from W.sub.1 to W.sub.4 (e.g., from left to right) to determine a value of the resulting hidden vector; [0122] further elaborates on using all word vectors.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and add He in order to improve and optimize the searching ability of the system (He [0016]: the technologies described herein provide techniques and constructs to improve the relevance of responses to inputs (e.g., query, question, request for information, etc.), such as providing a relevant response to a search query, by optimizing the semantic similarity between the input and the response. In some examples, a computing system can receive an input for information comprising one or more words (e.g., a bag of words, sentence, etc.). The computing system can sequentially process each word in the input and, using a neural network, map the words one by one into a hidden semantic space, replete with historical information. After each word is mapped, the computing system can determine a semantic representation for the input based on the value and dimensionality of the resulting natural embedding vector. [0112]: for another example, in an RNN-LSTM model, the computing system can process each word sequentially through an LSTM cell to determine the resulting hidden vector. The LSTM cell can apply an input gate parameter, an output gate parameter, and a forget gate parameter during the mapping process. In such an example, the LSTM cell can attenuate unimportant words and can emphasize keywords in the hidden vector mapping. As such, the resulting semantic representation may be of increased accuracy.).

Corresponding system claim 8 is rejected similarly to claim 1 above. Additional limitations: device with processor(s) and memory (Zeiler [FIG. 1] shows the corresponding system). Corresponding product claim 15 is rejected similarly to claim 1 above. Additional limitations: computer-readable medium capable of reading and executing instructions (Zeiler [0076-0077]).

Regarding claim 2, the combination of Zeiler and He teaches the method of claim 1, wherein the combination rule provides that the combination term vector is generated at least in part by concatenating the group of term vectors (Zeiler [0030] and [0047], as quoted for claim 1; [0018-0020] teach the machine learning/neural network methods (corresponding parameters/rules) which are applied to a plurality of vectors for generation and combination; [FIG. 4] steps 401, 404, and 406 show this visually). Corresponding system claim 9 and product claim 16 are rejected similarly to claim 2 above.

Regarding claim 3, the combination of Zeiler and He teaches the method of claim 2, wherein the combination rule provides that the combination term vector is generated at least in part by concatenating the group of term vectors in a defined order.
(He [0048]: for example, with an input of two words, the mapping module can apply a first parameter to the vector associated with the first word of an input, and a second parameter to the initial hidden vector of the input, the initial hidden vector being associated with the first word of the input. In response to the application of the first parameter to the vector and the second parameter to the initial hidden vector, the mapping module can calculate a first hidden vector. The mapping module can then apply the first parameter to the second word of the input and the second parameter to the first hidden vector to calculate a second hidden vector. [0061]: in various examples, an initial mapping module, such as the initial mapping module depicted in block 220, can calculate a vector X for each word of the word sequence. [0105]: the order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be executed in any order, combined in any order.) Corresponding system claim 10 and product claim 17 are rejected similarly to claim 3 above.

Regarding claim 5, the combination of Zeiler and He teaches the method of claim 1, wherein the term vector rules comprise an elementwise mean value rule, and wherein the elementwise mean value rule provides that each element of a term vector comprises a mean value based on corresponding elements of the word vectors (He [0100] shows an elementwise operation with an elementwise mean value rule corresponding to the word/term vectors). Corresponding system claim 12 and product claim 19 are rejected similarly to claim 5 above.

Regarding claim 6, the combination of Zeiler and He teaches the method of claim 5, wherein the elementwise mean value rule is an elementwise weighted mean value rule providing a weighted mean value using respective weights for the corresponding elements (He [0100] shows an elementwise operation with an elementwise mean value rule corresponding to the word/term vectors, including a weighted mean value using respective weights for the corresponding elements). Corresponding system claim 13 and product claim 20 are rejected similarly to claim 6 above.

Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zeiler in view of He, and further in view of US 20200201908 A1, Ma, Yiming et al. (hereinafter Ma).

Regarding claim 4, the combination of Zeiler and He teaches the method of claim 1; the combination lacks explicitly and orderly teaching wherein the term vector rules comprise at least one of an elementwise minimum rule or an elementwise maximum rule, wherein the elementwise minimum rule provides that each element of a term vector comprises a minimum value of corresponding elements of the word vectors, and wherein the elementwise maximum rule provides that each element of the term vector comprises a maximum value of corresponding elements of the word vectors. However, Ma teaches these limitations (Ma [0039]: analysis apparatus 204 then aggregates intermediate vectors 214 into embeddings 220 for the corresponding entity IDs 210. For example, analysis apparatus 204 may generate an embedding for an entity ID as the element-wise minimum, maximum, sum, average, variance, and/or standard deviation of all intermediate vectors 214 produced from the entity ID.
[0057]: an element-wise aggregation 342 is applied to intermediate vectors 330-334 to produce embedding 346, and another element-wise aggregation 344 is applied to intermediate vectors 336-340 to produce embedding 348. For example, element-wise aggregations 342-344 may compute embeddings 346-348 as the element-wise maximums from the corresponding intermediate vectors 330-340. In turn, embedding 346 may represent member ID 302, and embedding 348 may represent job ID 304. [0064]: an element-wise aggregation of the intermediate vectors is then used to produce an embedding that is outputted for use by a machine learning model. For example, a sum, average, maximum, minimum, and/or other type of aggregation may be applied to corresponding elements of the intermediate vectors to produce an embedding with the same dimensionality as the intermediate vectors.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and add Ma in order to improve the efficiency and utilization of IDs in the system (Ma [0004]: machine learning and/or analytics may be facilitated by mechanisms for improving the creation, profiling, management, sharing, and reuse of features and/or machine learning models. [0012]: IDs for entities in the online system may span a dimensional space of millions to billions, which may be too sparse and/or large for efficient use with one-hot encoding and/or one-to-one mapping of the IDs to embeddings. [0013]: to improve use of entity IDs with machine learning models, the disclosed embodiments use a set of hash functions to convert each entity ID into a set of hash values. Each hash value is used as an index into a lookup table for the corresponding hash function, and the entry represented by the index in the lookup table is obtained as an intermediate vector representation of the entity ID. [0041]: at the same time, the use of multiple hash functions 208, lookup tables 210, and/or element-wise aggregations to generate embeddings 220 from entity IDs 210 may reduce the likelihood that the same embedding is produced from multiple entity IDs 210; the element-wise aggregation of three intermediate vectors 214 into embeddings 220 may increase the likelihood that a given entity ID is uniquely represented at each dimension of the resulting embedding.) Corresponding system claim 11 and product claim 18 are rejected similarly to claim 4 above.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Zeiler in view of He, and further in view of US 20190243914 A1, Lugowski, Adam (hereinafter Lugowski).

Regarding claim 7, the combination of Zeiler and He teaches the method of claim 6; the combination lacks explicitly and orderly teaching wherein the respective weights are based on inverse document frequencies associated with the words. However, Lugowski teaches this limitation (Lugowski [0080]: in some embodiments, proximity may be determined via a term frequency-inverse document frequency (TFIDF) model. In information retrieval, TFIDF is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. The TFIDF value increases proportionally to the number of times a word appears in the document, but is often offset by the frequency of the word in the corpus, which helps to adjust for the fact that some words appear more frequently in general. In a TFIDF approach, the dimensionality index may include each word that appears in any document included in a corpus data object. The dimensionality index may also indicate the number of documents in which each word occurs. Thus, after vectorization, the weight of a term that occurs in a document may be proportional to both the term frequency in the document and an inverse function of the number of documents in which it occurs. Similar schemes may also be used, including Term Frequency Proportional Document Frequency.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and add Lugowski in order to create a more accurate system/output by adjusting for the fact that some words appear more frequently in general (Lugowski [0080], quoted above). Corresponding system claim 14 is rejected similarly to claim 7 above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARYAN D TOUGHIRY, whose telephone number is (571) 272-5212. The examiner can normally be reached Monday through Friday, 9 am to 5 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aleksandr Kerzhner, can be reached at (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ARYAN D TOUGHIRY/
Examiner, Art Unit 2165

/ALEKSANDR KERZHNER/
Supervisory Patent Examiner, Art Unit 2165
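The term-vector combination rules at issue across claims 2-7 (concatenation in a defined order, elementwise mean, weighted mean with IDF-based weights, and elementwise minimum/maximum) can be sketched in a few lines. Everything below (the function names, toy vectors, tiny corpus, and the smoothed IDF formula) is illustrative only and is not taken from Zeiler, He, Ma, or Lugowski:

```python
import math

def concat(vectors):
    """Claims 2-3: concatenate the term vectors in a defined order."""
    return [x for v in vectors for x in v]

def elementwise_mean(vectors):
    """Claim 5: each element is the mean of the corresponding elements."""
    return [sum(col) / len(col) for col in zip(*vectors)]

def elementwise_weighted_mean(vectors, weights):
    """Claim 6: weighted mean using a respective weight per vector."""
    total = sum(weights)
    return [sum(w * x for w, x in zip(weights, col)) / total
            for col in zip(*vectors)]

def elementwise_min(vectors):
    """Claim 4: each element is the minimum of the corresponding elements."""
    return [min(col) for col in zip(*vectors)]

def elementwise_max(vectors):
    """Claim 4: each element is the maximum of the corresponding elements."""
    return [max(col) for col in zip(*vectors)]

def idf_weights(words, corpus):
    """Claim 7: per-word weights from a (smoothed) inverse document frequency."""
    n = len(corpus)
    return [math.log(n / (1 + sum(w in doc for doc in corpus))) + 1
            for w in words]

# Toy word vectors for the two-word term "machine learning".
word_vecs = [[0.1, 0.4], [0.3, 0.2]]
corpus = [{"machine"}, {"machine", "learning"}, {"machine", "press"}]

print(concat(word_vecs))           # [0.1, 0.4, 0.3, 0.2]
print(elementwise_min(word_vecs))  # [0.1, 0.2]
print(elementwise_max(word_vecs))  # [0.3, 0.4]
weights = idf_weights(["machine", "learning"], corpus)
print(elementwise_weighted_mean(word_vecs, weights))
```

In a real pipeline the word vectors would come from a trained embedding model and the document-frequency statistics from the indexed corpus; the point here is only the shape of each claimed rule, and how the rarer word ("learning") receives the larger IDF weight in the weighted mean.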

Prosecution Timeline

Aug 19, 2024: Application Filed
Jul 29, 2025: Non-Final Rejection — §103, §DP
Oct 31, 2025: Response Filed
Jan 09, 2026: Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by the same examiner involving similar technology

Patent 12602374: DATA ACQUISITION METHOD AND APPARATUS, COMPUTER DEVICE AND STORAGE MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12596596: USER-SPACE PARALLEL ACCESS CHANNEL FOR TRADITIONAL FILESYSTEM USING CAPI TECHNOLOGY
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12579141: GENERATING QUERY ANSWERS FROM A USER'S HISTORY
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572390: SYSTEMS AND METHODS FOR ADAPTIVE WEIGHTING OF MACHINE LEARNING MODELS
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12573292: VEHICLE IDENTIFICATION USING ADVANCED DRIVER ASSISTANCE SYSTEMS (ADAS)
Granted Mar 10, 2026 (2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 88% (+19.9%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 189 resolved cases by this examiner. Grant probability derived from career allow rate.
