Prosecution Insights
Last updated: April 19, 2026
Application No. 18/376,101

COMPUTER IMPLEMENTED METHOD AND SYSTEM

Final Rejection §103
Filed: Oct 03, 2023
Examiner: ADESANYA, OLUJIMI A
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: Bubblr Inc.
OA Round: 2 (Final)
Grant Probability: 66% (Favorable)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 66% — above average (430 granted / 655 resolved; +3.6% vs TC avg)
Interview Lift: +25.5% (strong) — comparing resolved cases with and without an interview
Avg Prosecution: 3y 6m (typical timeline); 35 applications currently pending
Career History: 690 total applications across all art units
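Taken at face value, the headline figures above follow from simple arithmetic on the examiner's career record. A minimal sketch in Python, assuming the "with interview" probability is just the career allow rate plus the interview lift (the tool's actual methodology is not disclosed):

```python
# Career record shown on the dashboard.
granted, resolved = 430, 655

# Career allow rate: 430 / 655 ~ 65.6%, displayed as 66%.
allow_rate = granted / resolved

# Assumption: the 91% "with interview" figure is the allow rate
# plus the +25.5-point interview lift, rounded to a whole percent.
interview_lift = 0.255
with_interview = allow_rate + interview_lift

print(f"allow rate:     {allow_rate:.1%}")      # 65.6%
print(f"with interview: {with_interview:.1%}")  # 91.1%
```

The two displayed values (66% and 91%) are consistent with this simple additive model under whole-percent rounding.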

Statute-Specific Performance

§101: 19.3% (-20.7% vs TC avg)
§103: 40.6% (+0.6% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
TC avg = Tech Center average estimate • Based on career data from 655 resolved cases
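As a sanity check, the per-statute deltas above are mutually consistent with a single Tech Center baseline. A quick sketch, assuming each delta is a simple difference between the examiner's rate and the TC average:

```python
# Examiner's per-statute rates (%) and the displayed deltas vs the TC average.
examiner = {"101": 19.3, "103": 40.6, "102": 17.7, "112": 12.9}
delta    = {"101": -20.7, "103": 0.6, "102": -22.3, "112": -27.1}

# Implied TC average per statute: examiner rate minus delta.
tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # every statute implies the same 40.0% baseline
```

Each statute back-solves to the same 40.0% figure, suggesting the tool compares against one flat Tech Center estimate rather than per-statute averages.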

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claim 1 and the cited portions of Socher and the other references Saikia, Spiegel and Khosla failing to disclose the limitation "generating a prompt suitable for a large language model (LLM) by processing the query using a trained model that is implemented using an application of a convolution neural network (CNN)" (Arguments, pg. 15, second para. – pg. 17) have been considered but are moot in light of the new grounds of rejection with reference Nguyen as provided below.

Response to Amendment

The prior 35 U.S.C. 101 rejection of the claims (6/17/25) is withdrawn in light of amendments to the independent claim.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

1. Claims 1-3, 5, 7-13, 15-17 and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over Socher et al. US 2024/0020538 A1 ("Socher") in view of Nguyen et al. US 2025/0005293 A1 ("Nguyen") and Saikia et al. US 2025/0094735 A1 ("Saikia").

Per Claim 1, Socher discloses a computer-implemented method of providing a response to a query from a client device, the method implemented by a processing resource, the method comprising: providing, via a cloud, a hardware platform configured to receive a query from a client device (para. [0046]; para. [0053]-[0056]; para. [0069]-[0070]); receiving the query from a client device via a user interface, through an application programming interface (API) (para. [0054]-[0056]); generating a prompt suitable for a large language model (LLM) by processing the query (fig. 2; NL preprocessing submodule 231 may tokenize and generate a search query based on received input … The LLM interface submodule 234 may interface with LLMs external to text generation module 230, prepare inputs for APIs associated with external LLMs, prepare prompts based on search results received by search submodule 232, prepare prompts based on inputs processed by NL preprocessing submodule 231 …, para. [0056]); determining, using the prompt, whether the query relates to contemporaneous data (fig. 1B; fig. 2; para. [0020]; In view of the need for improved generative AI systems that reflect on time-varying information such as real-time news events, embodiments described herein provide systems and methods for a real-time search based generative AI platform. Specifically, in response to a user input to perform an NLP task, the generative AI platform may trigger a real-time search based on the user input …, para. [0021]; the text generation server 110 may determine whether a real-time search is needed …, para. [0028]; FIG. 2 is a simplified diagram illustrating a computing device 200 implementing the text generation server 110 …, para. [0050]; para. [0056]); and based on the determination, providing a response including a contemporaneous component (In this way, the generative AI system may generate a text output based on real-time search results that reflect most-up-to-date information according to a user query …, para. [0025]; para. [0028]).

Socher does not explicitly disclose generating a prompt suitable for a large language model (LLM) by processing the query using a trained model that is implemented using an application of a convolutional neural network (CNN). However, this feature is taught by Nguyen: generating a prompt suitable for a large language model (LLM) by processing the query using a trained model that is implemented using an application of a convolutional neural network (CNN) (para. [0002]; para. [0030]; the VLM or another machine learning model upstream of the VLM (e.g., a convolutional neural network (CNN) and/or image encoder) may be implemented at least in part on an edge device … This derived data may then be assembled into an LLM prompt with other data, such as an individual's prompt to his or her automated assistant …, para. [0024]; para. [0057]).

Socher in view of Nguyen does not explicitly disclose providing a response utilizing a low randomness request. However, this feature is taught by Saikia ((iv) temperature: Temperature may be used by the GenAI Model 505 to govern a randomness and thus a creativity of the responses. The temperature may be a number between zero and one …, para. [0152]; zero temperature as implying low randomness request).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Nguyen with the method of Socher in arriving at the missing features of Socher, as well as to combine the teachings of Saikia with the method of Socher in view of Nguyen in arriving at the missing features of Socher in view of Nguyen, because such combination would have resulted in disambiguating and providing better responses to NL inputs (Nguyen, para. [0025]), and in identifying outputs/responses associated with errors (Saikia, para. [0038]).

Per Claim 2, Socher in view of Nguyen and Saikia discloses a method according to Claim 1. Socher discloses: wherein providing a response including a contemporaneous component comprises: if the query relates to contemporaneous data, determining whether the large language model is configured to provide a contemporaneous component (para. [0028]; para. [0049]; para. [0109]); and if the large language model is configured to provide a contemporaneous component, providing a response based on output from the large language model, wherein the response comprises the contemporaneous component (para. [0021]; para. [0025]; In one embodiment, the text generation server 110 may select a LLM from candidate LLMs 116a-n for forwarding the NLP request depending on a type of the NLP task. For example, a LLM 116a may be used for composing an article, while another LLM 116b may be selected for generating a system response in a conversation. The text generation server 110 may then generate NL output 125 based on the outputs from the LLMs and return to the user device 120 for display …, para. [0049]); and if the large language model is not configured to provide a contemporaneous component, obtaining the contemporaneous component and providing a response including the obtained contemporaneous component (For example, a LLM 116a may be used for composing an article, while another LLM 116b may be selected for generating a system response in a conversation …, para. [0049]).

Per Claim 3, Socher in view of Nguyen and Saikia discloses a method according to Claim 1. Socher discloses wherein processing the query to generate a prompt for a large language model comprises an application of natural language processing or machine learning techniques to determine the prompt for the large language model (The LLM interface submodule 234 may interface with LLMs external to text generation module 230, prepare inputs for APIs associated with external LLMs, prepare prompts based on search results received by search submodule 232, prepare prompts based on inputs processed by NL preprocessing submodule 231 …, para. [0056]).

Per Claim 5, Socher in view of Nguyen and Saikia discloses a method according to Claim 1. Saikia discloses wherein the low randomness request is a zero-temperature request to the large language model (para. [0152]).

Per Claim 7, Socher in view of Nguyen and Saikia discloses a method according to Claim 1. Socher discloses wherein the trained model is trained to determine a presence of terms which identify contemporaneity in the query (para. [0049]).

Per Claim 8, Socher in view of Nguyen and Saikia discloses a method according to Claim 2. Socher discloses: wherein providing the response including the contemporaneous component comprises: updating the large language model to include the contemporaneous component (para. [0141]); and obtaining a response from the large language model including the contemporaneous component (para. [0141]).
Per Claim 9, Socher in view of Nguyen and Saikia discloses a method according to Claim 8. Socher discloses wherein the method further comprises obtaining a further response component and determining whether the further response component comprises further contemporaneous components (para. [0141]-[0143]).

Per Claim 10, Socher in view of Nguyen and Saikia discloses a method according to Claim 1. Saikia discloses wherein, prior to providing the response including the contemporaneous component, applying a validation process to the response (para. [0038]).

Per Claim 11, Socher in view of Nguyen and Saikia discloses a method according to Claim 10. Saikia discloses wherein the validation process is based on a temperature setting in the large language model (para. [0038]; para. [0152]).

Per Claim 12, Socher in view of Nguyen and Saikia discloses a method according to Claim 1. Socher discloses wherein the query comprises an image (para. [0038]).

Per Claim 13, Socher in view of Nguyen and Saikia discloses a method according to Claim 12. Socher discloses wherein the query is pre-processed to extract content from the image (para. [0038]; para. [0060]).

Per Claim 14, Socher in view of Nguyen and Saikia discloses a method according to Claim 13. Nguyen discloses wherein the pre-processing comprises an application of the convolutional neural network to the image (para. [0024]).

Per Claim 15, Socher in view of Nguyen and Saikia discloses a method according to Claim 1. Socher discloses wherein the query comprises an audio component (para. [0038]).

Per Claim 16, Socher in view of Nguyen and Saikia discloses a method according to Claim 15. Socher discloses wherein the query is pre-processed to extract content from the audio component (para. [0038]; para. [0060]).

Per Claim 17, Socher in view of Nguyen and Saikia discloses a method according to Claim 16. Socher discloses wherein the pre-processing comprises an application of digital signal processing techniques (para. [0038]; para. [0050]; para. [0060]).

Per Claim 20, Socher in view of Nguyen and Saikia discloses a system configured to implement the method of Claim 1 (Socher, Abstract; fig. 6).

Per Claim 21, Socher in view of Nguyen and Saikia discloses a non-transitory computer readable medium which comprises instructions which, when executed by a processing medium, configures the processing medium to implement the method of Claim 1 (Socher, para. [0089]; para. [0152]).

Per Claim 22, Socher in view of Nguyen and Saikia discloses a computing device configured to implement the method of Claim 1 (Socher, Abstract; para. [0021]).

2. Claims 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Socher in view of Nguyen and Saikia as applied to claim 1 above, and further in view of Khosla et al. US 2025/0005051 A1 ("Khosla").

Per Claim 18, Socher in view of Nguyen and Saikia discloses a method according to Claim 1. Socher does not explicitly disclose wherein the prompt and query are re-processed to provide an updated response to the query. However, this feature is taught by Khosla (The aggregator may analyze the natural language question and determine what search systems to retrieve passages and QA pairs from, to answer that question. The aggregator may retrieve those passages (and related QA pairs) and use them, along with the question, to formulate a prompt …, para. [0012]; the natural language question answer service can utilize a trained large language model (LLM) to use the prompt and generate an answer to the natural language question. The LLM may be a trained machine learning model utilizing Retrieval Augmented Generation (RAG) techniques to generate answers using semantics (e.g., in addition to or alternatively to lexical techniques) to answer the question …, para. [0013]; para. [0029], answer received after retrieved passages as updated response). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Khosla with the method of Socher in view of Nguyen and Saikia in arriving at the missing features of Socher, because such combination would have resulted in generating accurate answers/responses to user queries (Khosla, para. [0010]-[0011]).

Per Claim 19, Socher in view of Nguyen, Saikia and Khosla discloses a method according to Claim 18. Khosla discloses wherein the re-processing utilises a second large language model distinct from the large language model (para. [0010]; para. [0012]-[0013]; the aggregator component 104 may be implemented through use of machine learning models (e.g., generative AI models, etc.) …, para. [0029]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO 892 form. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUJIMI A ADESANYA whose telephone number is (571) 270-3307. The examiner can normally be reached Monday-Friday, 8:30-5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richemond Dorvil, can be reached at 571-272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OLUJIMI A ADESANYA/
Primary Examiner, Art Unit 2658

Prosecution Timeline

Oct 03, 2023 — Application Filed
Jun 13, 2025 — Non-Final Rejection — §103
Dec 17, 2025 — Response Filed
Feb 24, 2026 — Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591739 — METHOD AND SYSTEM FOR DIACRITIZING ARABIC TEXT
2y 5m to grant • Granted Mar 31, 2026
Patent 12585686 — EVENT DETECTION AND CLASSIFICATION METHOD, APPARATUS, AND DEVICE
2y 5m to grant • Granted Mar 24, 2026
Patent 12585481 — METHOD AND ELECTRONIC DEVICE FOR PERFORMING TRANSLATION
2y 5m to grant • Granted Mar 24, 2026
Patent 12578779 — Multiple Stage Network Microphone Device with Reduced Power Consumption and Processing Load
2y 5m to grant • Granted Mar 17, 2026
Patent 12579181 — Synchronization of Sensor Network with Organization Ontology Hierarchy
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 66%
With Interview (+25.5%): 91%
Median Time to Grant: 3y 6m
PTA Risk: Moderate
Based on 655 resolved cases by this examiner. Grant probability derived from career allow rate.
