Prosecution Insights
Last updated: April 19, 2026
Application No. 18/543,798

DATA SOURCE MAPPER FOR ENHANCED DATA RETRIEVAL

Status: Final Rejection (§103)
Filed: Dec 18, 2023
Examiner: CHEUNG, HUBERT G
Art Unit: 2152
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intuit Inc.
OA Round: 4 (Final)
Grant Probability: 63% (Moderate)
Expected OA Rounds: 5-6
Estimated Time to Grant: 4y 6m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 63% of resolved cases granted (246 granted / 390 resolved; +8.1% vs TC avg)
Interview Lift: +49.3% (strong) for resolved cases with an interview vs. without
Typical Timeline: 4y 6m average prosecution; 23 applications currently pending
Career History: 413 total applications across all art units

Statute-Specific Performance

§101: 11.6% (-28.4% vs TC avg)
§103: 47.9% (+7.9% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)

Tech Center averages are estimates • Based on career data from 390 resolved cases
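The headline figures above (career allow rate, interview lift) can be reproduced from per-case outcome records. The sketch below is a minimal illustration of one plausible derivation; the record fields and sample numbers are assumptions for demonstration, not data pulled from this examiner's docket, and the dashboard's exact definition of "interview lift" (relative vs. percentage-point) is not stated.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # application ended in a grant
    had_interview: bool  # at least one examiner interview was held

def allow_rate(cases):
    """Share of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Relative lift in allow rate for interviewed vs. non-interviewed cases (one possible definition)."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) / allow_rate(without_iv) - 1.0

# Illustrative synthetic docket only: 390 resolved cases, 246 grants -> ~63% allow rate.
cases = ([ResolvedCase(True, True)] * 120 + [ResolvedCase(False, True)] * 20
         + [ResolvedCase(True, False)] * 126 + [ResolvedCase(False, False)] * 124)
print(f"career allow rate: {allow_rate(cases):.1%}")
print(f"interview lift:    {interview_lift(cases):+.1%}")
```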

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to the amendments, arguments and remarks, filed on 12/24/2025, in which claim(s) 1-7 and 9-20 is/are presented for further examination. Claim(s) 1-3, 5-7, 9, 12-15 and 17-20 has/have been amended. Claim(s) 8 has/have been previously cancelled.

Response to Amendment

Applicant's amendment(s) to claim(s) 1-3, 5-7, 9, 12-15 and 17-20 has/have been accepted. The examiner thanks applicant's representative for pointing out where s/he believes there is support for the amendment(s).

Response to Arguments

Applicant's arguments with respect to claim(s) 1-7 and 9-20, filed on 12/24/2025, have been fully considered but they are not persuasive. Accordingly, this action has been made FINAL.

Applicant's arguments with respect to the rejection(s) of claim(s) 1, 13 and 20 under 35 U.S.C. 103, see the middle of page 7 to the top of page 9 of applicant's remarks, filed on 12/24/2025, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-5, 11-17 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ferrucci et al., US 8,301,438 B2 (hereinafter "Ferrucci") in view of Wang et al., US 12,271,698 B1 (hereinafter "Wang").

Claims 1, 13 and 20

Ferrucci discloses a method for enhanced electronic data retrieval, comprising:

receiving a natural language query via a user interface (Ferrucci, Fig. 1, see Natural language questions inputted into Question processing module 10; and Ferrucci, Col. 10, lines 58-67, see input/output interface 705 including a keyboard, a mouse or the like);

based on training data that comprises natural language strings associated with labels indicating entity names (Ferrucci, Col. 5, line 64-Col. 6, line 37, see natural language question "In this 1992 Robert Altman film, Tim Robbins gets angry messages from a screenwriter he's snubbed" [i.e., "natural language string"], named entities "Robert_Altman" and "Tim Robbins" [i.e., "entity names"] can be detected, and information "film" related to the type of the answer and time verification information "1992" related to the answer can be extracted; in step S305 for linked database retrieval, search is performed in different data sources such as linked data of DBpedia and IMDB [i.e., "one or more electronic data sources"] based on the named entities detected in named entity detection step S301);

generating, via a language processing machine learning model (Ferrucci, Col. 9, lines 41-51, see the apparatus for processing natural language questions according to a preferred embodiment of the present invention may further include a training module (not shown in the figure), which is configured to perform machine learning in advance according to given features of candidate answers, so as to obtain a satisfying scoring model; accordingly, when the candidate evaluation module 507 synthesizes the values of features of candidate answers, a score can be computed for each candidate answer using the trained scoring model, and the candidate answer with the highest score can be selected as the final answer provided to the user), a response to the natural language query based on the data related to the natural language query (Ferrucci, Col. 6, lines 4-20, see in step S305 for linked database retrieval, search is performed in different data sources such as linked data of DBpedia and IMDB based on the named entities detected in named entity detection step S301; next, in candidate answer generation step S307, a candidate answer is generated based on a search result from linked database retrieval step S305; and Ferrucci, Fig. 1, see Answer processing module 105, Answer formulating and sending Brief answer).

Ferrucci does not appear to explicitly disclose generating, based on an input comprising the natural language query, a given output that indicates names of one or more electronic data endpoints using a named entity recognition (NER) machine learning model trained through a supervised learning process; retrieving data related to the natural language query by transmitting requests to the one or more electronic data endpoints and the one or more additional electronic data endpoints, wherein the requests to the one or more additional electronic data endpoints are transmitted based on a semantic similarity comparison between embedding representations of the one or more electronic data endpoints and embedding representations of the additional electronic data endpoints stored in a knowledge graph that maps relationships among electronic data endpoints.

Wang discloses generating, based on an input comprising the natural language query, a given output (Wang, Col. 4, lines 13-26, see natural language query 140 may be received as (or transcribed into) a text string, in some embodiments, which may be processed by natural language query processing system 110 into an intermediate representation (according to the various techniques discussed below with regard to FIGS. 2-8). The intermediate representation may then be used to generate the appropriate queries, requests, or other interactions with storage systems that store data sets 120 in order to generate a desired result for natural language query, which may be provided as indicated at 150. Such a result 150 may be returned as a text-based result and/or may be used to generate various result displays (e.g., various charts, graphs, or other visualizations of data that answers the natural language query) as result 150) that indicates names of one or more electronic data endpoints using a named entity recognition (NER) machine learning model trained through a supervised learning process (Wang, Col. 4, lines 27-48, see natural language query processing system 110 may implement natural language query processing pipeline 130, which may be used to process natural language query 140 in order to provide result 150. Natural language query processing pipeline 130 may include various different features, components, or stages, such as NER model 132, which may determine entities in natural language query 140, entity linking 134, which may link the entities predicted by NER model 132 to data sets 120 [i.e., data sets are the "endpoints"], data set selection 136 which may determine which ones data sets 120 should be used to satisfy entity linkages, and query generation 138 which may take the selected data sets, entities, and other information produced as part of natural language query processing pipeline 130 to generate quer(ies) in query language(s) (e.g., using various syntax, protocols, interfaces, or other parameters) in order to perform the quer(ies) to data sets 120. One example of a natural language query processing pipeline 130 is discussed in detail below with regard to FIGS. 2-6; however, the techniques described with respect to NER model 132 can be integrated into various other NL query processing pipelines or workflows, and thus the following example is not intended to be limiting);

retrieving data related to the natural language query by transmitting requests to the one or more electronic data endpoints and the one or more additional electronic data endpoints (Wang, Col. 27, lines 3-15, see, at 740, the quer(ies) may be performed to at least one of the data set(s) to return a result to the natural language query via the interface of the natural language query processing system; and Wang, Fig. 7, step 740 "Perform quer(ies) to at least one of the data set(s) to return a result to the natural language query, the quer(ies) being generated in a query language using the entity prediction generated by the NER machine learning model"),

wherein the requests to the one or more additional electronic data endpoints are transmitted based on a semantic similarity comparison between embedding representations of the one or more electronic data endpoints and embedding representations of the additional electronic data endpoints stored in a knowledge graph that maps relationships among electronic data endpoints (Wang, Col. 5, line 39-Col. 6, line 12, see matching type filter 166 may inform token-level matching type embedding 162. Token-level matching type embedding 162 may consider whether a token is part of a span of one (or more) words in the natural language query that has an exact match (e.g., the words "weekly sales" is a span in a natural language query that has an exact match returned as part of fuzzy search 161 of a table column that is "weekly sales"). For those tokens corresponding to words in the exact match, an embedding type may be added to the token embedding input into encoder 163. Matching type filter list 166 may indicate to token-level matching type embedding 162 which matching types are found. Consider the previous example: out of the tokens for "Show me weekly sales per product for the last 4 weeks" a matching type embedding may be added to the token for "weekly" and the token for "sales." Different types of matching types may be added according to the type of the match, such as "column", "cell_value", "literal_value", "multiple_match", "column custom aggregation", "column time", "column integer", "column attribute" and "column number." Note that tokens that do not have a matching type may not have an embedding added or may have an embedding added indicating no match type (e.g., "none"); and Wang, Col. 6, lines 37-47, see matching type filter list 166 may also be used to perform post-processing on entity predictions on spans provided by span classifiers 165. For example, match and overlap filtering 167 may filter out predicted entities that overlap with exactly matched entities, in some embodiments. A prediction of a "sales" entity may overlap with a "weekly sales" entity. If the "weekly sales" entity had an exact match [i.e., "semantic similarity"], as indicated by a matching type (as opposed to a matching type of "none"), then the "sales" entity may be removed from the entity prediction provided for natural language query by NER model 132).

Ferrucci and Wang are analogous art because they are from the same problem-solving area of identifying entities. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Ferrucci and Wang before him/her, to modify the entity identification of Ferrucci to include the named entity recognition (NER) machine learning model of Wang because it would allow the efficient training and expanding of entity information. The suggestion/motivation for doing so would have been to provide performance and analysis benefits in different data sets being spread across many different locations and types of storage systems, see Wang, Col. 1, lines 8-27. Therefore, it would have been obvious to combine Wang with Ferrucci to obtain the invention as specified in the instant claim(s).

Claim(s) 13 and 20 recite(s) similar limitations to claim 1 and is/are rejected under the same rationale. With respect to claim 13, Ferrucci discloses a system for enhanced electronic data retrieval, comprising: one or more hardware processors (Ferrucci, Col. 10, lines 20-29, see processors); and a non-transitory memory storing instructions (Ferrucci, Col. 10, lines 20-29, see storage medium). With respect to claim 20, Ferrucci discloses a non-transitory computer readable storage medium comprising instructions (Ferrucci, Col. 10, lines 20-29, see storage medium).

Claims 2 and 14

With respect to claims 2 and 14, the combination of Ferrucci and Wang discloses wherein generating the given output comprises: providing the natural language query as an input to the NER machine learning model (Ferrucci, Fig. 1, see Natural language questions inputted into Question processing module 10; and Park, [0204] see below); and receiving, as an output from the NER machine learning model in response to the input, a syntax tree indicating names of the one or more electronic data endpoints (Wang, Col. 4, lines 27-48, see natural language query processing pipeline 130, including NER model 132, which may determine entities in natural language query 140, entity linking 134, which may link the entities predicted by NER model 132 to data sets 120 [i.e., data sets are the "endpoints"], data set selection 136, and query generation 138, as quoted above with respect to claim 1).

Claims 3 and 15

With respect to claims 3 and 15, the combination of Ferrucci and Wang discloses wherein generating the given output further comprises mapping the names of the one or more electronic data endpoints to addresses of the one or more electronic data endpoints (Wang, Col. 4, lines 27-48, see entity linking 134, which may link the entities predicted by NER model 132 to data sets 120 [i.e., data sets are the "endpoints" and where they are located in the network are the "addresses"], as quoted above with respect to claim 1).

Claims 4 and 16

With respect to claims 4 and 16, the combination of Ferrucci and Wang discloses wherein the syntax tree further indicates one or more of: a filter condition (Ferrucci, Col. 1, lines 36-52, see the document/passage retrieval module 103 performs keywords search in a database, and performs document filtering and passage post-filtering in a document containing the keywords, so as to generate candidate answers); an aggregation condition; or a sorting condition.
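For readers tracking the dispute, the data flow recited in claims 1-5 (a supervised NER model that emits a syntax tree naming data endpoints plus optional filter/aggregation/sorting conditions, expansion to additional endpoints by embedding similarity over a knowledge graph, then request dispatch) can be sketched in a few lines. This is an illustrative reconstruction from the claim language quoted above only; every identifier (ner_extract, knowledge_graph, the cosine threshold, the sample endpoints) is hypothetical and does not come from the application's specification or the cited references.

```python
from dataclasses import dataclass
import math

@dataclass
class SyntaxTree:
    # Hypothetical NER output per claims 2-5: endpoint names plus optional
    # filter / aggregation / sorting conditions.
    endpoint_names: list
    filter_condition: str = None
    aggregation_condition: str = None
    sorting_condition: str = None

def ner_extract(query: str) -> SyntaxTree:
    """Stand-in for a supervised NER model that tags endpoint names in the query."""
    known = ("sales_db", "crm_api")
    names = [n for n in known if n.replace("_", " ") in query.lower()]
    return SyntaxTree(endpoint_names=names, filter_condition="last 4 weeks")

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical knowledge graph: endpoint name -> (embedding, related endpoints).
knowledge_graph = {
    "sales_db": ([0.9, 0.1], ["warehouse_api"]),
    "warehouse_api": ([0.8, 0.2], []),
    "hr_db": ([0.1, 0.9], []),
}

def expand_endpoints(named, threshold=0.9):
    """Claim 1: add related endpoints whose embeddings are semantically similar to the named ones."""
    extra = []
    for name in named:
        emb, neighbors = knowledge_graph[name]
        extra += [n for n in neighbors if cosine(emb, knowledge_graph[n][0]) >= threshold]
    return extra

tree = ner_extract("Show me weekly sales db totals for the last 4 weeks")
targets = tree.endpoint_names + expand_endpoints(tree.endpoint_names)
print(targets)  # requests would then be transmitted to each target endpoint
```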
Claims 5 and 17

With respect to claims 5 and 17, the combination of Ferrucci and Wang discloses wherein the transmitting of the requests to the one or more electronic data endpoints and the one or more additional electronic data endpoints is based on the filter condition, the aggregation condition, or the sorting condition (Ferrucci, Col. 1, lines 36-52, see document filtering and passage post-filtering, as quoted above with respect to claims 4 and 16; Wang, Col. 27, lines 3-15, see, at 740, the quer(ies) may be performed to at least one of the data set(s) to return a result to the natural language query via the interface of the natural language query processing system; and Wang, Fig. 7, step 740 "Perform quer(ies) to at least one of the data set(s) to return a result to the natural language query, the quer(ies) being generated in a query language using the entity prediction generated by the NER machine learning model").

Claim 11

With respect to claim 11, the combination of Ferrucci and Wang discloses further comprising storing an entry in a cache based on the natural language query and the data related to the natural language query (Ferrucci, Col. 6, lines 4-20, see, in step S305 for linked database retrieval, search is performed in different data sources such as linked data of DBpedia and IMDB based on the named entities detected in named entity detection step S301; next, in candidate answer generation step S307, a candidate answer is generated based on a search result from linked database retrieval step S305. FIG. 4 is a flow chart of the step of searching in a linked database and generating a candidate answer according to an embodiment of the present invention. As shown in FIG. 4, first in matching step S401, a URI matching a named entity is searched for in linked data [i.e., "cache"] based on similarity. For the above exemplary natural language question, based on the named entities "Robert Altman" and "Tim Robbins" detected in named entity detection step S301, matching URIs "http://dbpedia.org/resource/Robert_Altman" and "http://dbpedia.org/resource/Tim_Robbins" can be retrieved from DBpedia respectively).

Claim 12

With respect to claim 12, the combination of Ferrucci and Wang discloses further comprising responding to a subsequent natural language query based on the entry in the cache without using the NER machine learning model to process the subsequent natural language query and without transmitting any requests to any data endpoints based on the subsequent natural language query (Ferrucci, Fig. 1, see Natural language questions inputted into Question processing module 10; Wang, Col. 27, lines 3-15; and Wang, Fig. 7, step 740, both as quoted above with respect to claims 5 and 17).

Claim(s) 6, 7, 18 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ferrucci in view of Wang in further view of Shah et al., US 2015/0161521 A1 (hereinafter "Shah").

Claims 6 and 18

Claims 6 and 18 incorporate all of the limitations above. The combination of Ferrucci and Wang discloses associated with the one or more electronic data endpoints and the one or more additional electronic data endpoints (Wang, Col. 4, lines 27-48, see natural language query processing pipeline 130, including NER model 132, entity linking 134 to data sets 120 [i.e., data sets are the "endpoints"], data set selection 136, and query generation 138, as quoted above with respect to claim 1). The combination of Ferrucci and Wang does not appear to explicitly disclose further comprising generating the requests based on request templates.

Shah discloses further comprising generating the requests based on request templates (Shah, [0042], see generating candidate request templates by the expected usage domain).

Ferrucci, Wang and Shah are analogous art because they are from the same problem-solving area of identifying entities. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Ferrucci, Wang and Shah before him/her, to modify the NER entity identification of the combination of Ferrucci and Wang to include the templating of Shah because it would allow efficient creating of requests. The suggestion/motivation for doing so would have been to use populated candidate request templates for training, see Shah, [0008]. Therefore, it would have been obvious to combine Shah with the combination of Ferrucci and Wang to obtain the invention as specified in the instant claim(s).

Claims 7 and 19

Claims 7 and 19 incorporate all of the limitations above. The combination of Ferrucci and Wang discloses associated with the one or more electronic data endpoints and the one or more additional electronic endpoints (Wang, Col. 4, lines 27-48, as quoted above with respect to claim 1). The combination of Ferrucci and Wang does not appear to explicitly disclose further comprising generating the requests in domain specific languages. Shah discloses further comprising generating the requests in domain specific languages (Shah, [0042], see generating candidate request templates by the expected usage domain). See claims 6 and 18 above for the motivation to combine.

Claim(s) 9 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ferrucci in view of Wang in further view of Hunn et al., US 2024/0330605 A1 (hereinafter "Hunn").

Claim 9

Claim 9 incorporates all of the limitations above. The combination of Ferrucci and Wang discloses the one or more electronic data endpoints indicated in the natural language query (Wang, Col. 4, lines 27-48, as quoted above with respect to claim 1). The combination of Ferrucci and Wang does not appear to explicitly disclose further comprising determining not to use a large language model (LLM) to process the natural language query based on determining that the NER machine learning model successfully identified.
Hunn discloses further comprising determining not to use a large language model (LLM) to process the natural language query based on determining that the NER machine learning model successfully identified (Hunn, Fig. 14, see flowchart on how to train the machine learning model; and Hunn, [0187], see the NLG [i.e., "natural language generation model", see Hunn, [0175]] is a named entity recognition model).

Ferrucci, Wang and Hunn are analogous art because they are from the same problem-solving area of identifying entities. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Ferrucci, Wang and Hunn before him/her, to modify the NER entity identification of the combination of Ferrucci and Wang to include the large language model of Hunn because it would allow other ways to train the machine learning model. The suggestion/motivation for doing so would have been to manage and mine a collection of electronic documents for certain types of information, see Hunn, Abstract. Therefore, it would have been obvious to combine Hunn with the combination of Ferrucci and Wang to obtain the invention as specified in the instant claim(s).

Claim 10

With respect to claim 10, the combination of Ferrucci, Wang and Hunn discloses wherein the LLM has a larger number of parameters than the NER machine learning model (Hunn, [0187], see the large language model (LLM) generating the natural language representation to describe the formal deviation; an LLM is a language model implemented as an artificial neural network 400 with a large number of parameters 434, typically on the order of billions of parameters 434; an LLM is a general purpose model useful for a wide range of AI/ML tasks, as opposed to being trained for a specific task such as sentiment analysis, named entity recognition [i.e., "NER"] or mathematical reasoning …).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
– Ting et al., US 2024/0249072 for predictive generation of electronic query data;
– Hertz et al., US 2018/0082183 for machine learning-based relationship association and related discovery and search engines;
– Hertz et al., US 2019/0354544 for machine learning-based relationship association and related discovery and search engines;
– Wang et al., US 12,265,528 for natural language query processing; and
– Kurz, DE 102013003055 for performing natural language searches.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Point of Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUBERT G CHEUNG whose telephone number is (571) 270-1396. The examiner can normally be reached M-R 8:00A-5:00P EST; alt. F 8:00A-4:00P EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Neveen Abel-Jalil, can be reached at (571) 270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Hubert Cheung/
Assistant Examiner, Art Unit 2152
Date: March 13, 2026

/NEVEEN ABEL JALIL/
Supervisory Patent Examiner, Art Unit 2152
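Claims 11-12 (caching a query and its retrieved data, then serving a repeat query from the cache without re-running the NER model or contacting any endpoint) and claim 9 (declining to invoke a larger LLM when the NER model succeeds) describe a short-circuit path around the pipeline. The sketch below is a minimal, hypothetical illustration of that control flow; none of the names or behavior are taken from the application's specification or the cited art.

```python
# Hypothetical in-memory cache keyed by the normalized query text.
query_cache = {}

def ner_endpoints(query: str) -> list:
    """Stand-in for the supervised NER model; returns endpoint names found in the query."""
    return [n for n in ("sales_db", "crm_api") if n.replace("_", " ") in query.lower()]

def answer(query: str) -> dict:
    key = query.strip().lower()
    if key in query_cache:
        # Claim 12: a cache hit is answered without the NER model and without
        # transmitting any requests to data endpoints.
        return query_cache[key]
    endpoints = ner_endpoints(query)
    if not endpoints:
        # Claim 9 (inverse case): only when NER fails to identify endpoints
        # would a larger LLM be consulted; that branch is not sketched here.
        return {"query": query, "data": None, "route": "LLM fallback"}
    data = {name: f"rows from {name}" for name in endpoints}      # stand-in for retrieval
    query_cache[key] = {"query": query, "data": data, "route": "NER"}  # claim 11: store entry
    return query_cache[key]

print(answer("Show sales db totals"))  # first call runs NER and populates the cache
print(answer("show sales db totals"))  # identical query is served from the cache
```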

Prosecution Timeline

Dec 18, 2023
Application Filed
Nov 15, 2024
Non-Final Rejection — §103
Jan 16, 2025
Examiner Interview Summary
Jan 16, 2025
Applicant Interview (Telephonic)
Jan 29, 2025
Response Filed
Mar 26, 2025
Final Rejection — §103
Apr 21, 2025
Applicant Interview (Telephonic)
Apr 21, 2025
Examiner Interview Summary
May 20, 2025
Response after Non-Final Action
Jun 30, 2025
Request for Continued Examination
Jul 08, 2025
Response after Non-Final Action
Sep 02, 2025
Non-Final Rejection — §103
Dec 18, 2025
Examiner Interview Summary
Dec 18, 2025
Applicant Interview (Telephonic)
Dec 24, 2025
Response Filed
Mar 10, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596674
A SYSTEM FOR DATA ARCHIVAL IN A BLOCKCHAIN NETWORK AND A METHOD THEREOF
2y 5m to grant • Granted Apr 07, 2026
Patent 12591611
EFFICIENT ACCESS MARKING APPROACH FOR EFFICIENT RETRIEVAL OF DOCUMENT ACCESS DATA
2y 5m to grant • Granted Mar 31, 2026
Patent 12585731
APPARATUS AND METHODS FOR DETERMINING A PROBABILITY DATUM
2y 5m to grant • Granted Mar 24, 2026
Patent 12561306
SYSTEMS AND METHODS FOR OPTIMIZING DATA PROCESSING IN A DISTRIBUTED COMPUTING ENVIRONMENT
2y 5m to grant • Granted Feb 24, 2026
Patent 12547594
SPATIAL-TEMPORAL STORAGE
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 63%
Grant Probability with Interview: 99% (+49.3%)
Median Time to Grant: 4y 6m
PTA Risk: High

Based on 390 resolved cases by this examiner. Grant probability derived from career allow rate.
