Prosecution Insights
Last updated: April 19, 2026
Application No. 18/492,360

User's Attention Based Context Weighting And Selection For Prompting Large Generative AI Models

Non-Final OA §101 §103 §112
Filed
Oct 23, 2023
Examiner
ADESANYA, OLUJIMI A
Art Unit
2658
Tech Center
2600 — Communications
Assignee
Qualcomm Incorporated
OA Round
3 (Non-Final)
66%
Grant Probability
Favorable
3-4
OA Rounds
3y 6m
To Grant
91%
With Interview

Examiner Intelligence

Grants 66% — above average
66%
Career Allow Rate
430 granted / 655 resolved
+3.6% vs TC avg
Strong +26% interview lift
Without
With
+25.5%
Interview Lift
resolved cases with interview
Typical timeline
3y 6m
Avg Prosecution
35 currently pending
Career history
690
Total Applications
across all art units
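The headline figures above follow from the raw counts shown on this page. A minimal sketch of that arithmetic, assuming the tool rounds the displayed rate to the nearest percent (the tool's exact rounding rules are an assumption):

```python
# Reproducing the dashboard figures from the raw counts on this page:
# 430 granted out of 655 resolved cases.
granted, resolved = 430, 655
allow_rate = granted / resolved                  # career allowance rate

print(f"Career allow rate: {allow_rate:.1%}")    # 65.6%, displayed as 66%

# "Interview lift" compares allowance among resolved cases that had an
# examiner interview against the overall baseline. Using the page's 91%
# with-interview figure against the career rate gives roughly the +25.5%
# lift shown above (small differences are rounding).
with_interview = 0.91
print(f"Interview lift: {with_interview - allow_rate:+.1%}")
```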

Statute-Specific Performance

§101
19.3%
-20.7% vs TC avg
§103
40.6%
+0.6% vs TC avg
§102
17.7%
-22.3% vs TC avg
§112
12.9%
-27.1% vs TC avg
Black line = Tech Center average estimate • Based on career data from 655 resolved cases
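Each statute row pairs the examiner's rate with a "vs TC avg" delta, so the Tech Center baseline (the black line) can be recovered as rate minus delta. A hedged sketch of that check; the metric's precise definition (e.g., rate of overcoming each rejection type) is an assumption, only the arithmetic is taken from the page:

```python
# statute: (examiner rate %, delta vs TC avg %), values from the table above
rows = {
    "§101": (19.3, -20.7),
    "§103": (40.6, +0.6),
    "§102": (17.7, -22.3),
    "§112": (12.9, -27.1),
}
for statute, (rate, delta) in rows.items():
    print(f"{statute}: implied TC average = {rate - delta:.1f}%")
# Every row recovers the same ~40.0% baseline, consistent with a single
# Tech Center average reference line in the chart.
```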

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/12/26 has been entered.

Response to Arguments

Applicant's arguments filed 1/12/26 have been fully considered but they are not persuasive. Regarding the 35 U.S.C. 101 rejection of the claims, Applicant argues that the amendments to the claims ensure the claims are directed to statutory subject matter (Arguments, pg. 9). Examiner respectfully disagrees, as the claims still recite data gathering and data analysis steps without significantly more, as set forth in the rejection below. Applicant's arguments with respect to claims 1, 16 and 30 (and, as a result, the claims dependent therefrom), namely that references Park and Prasad do not disclose the limitation "generate an enhanced prompt based on the user prompt and the subject matter to which the user is paying attention when or prior to receiving the user prompt by applying an adaptive important weighting to portions of the user prompt based on the attention of the user to the subject matter when or prior to receiving the user prompt" (Arguments, pg. 10-12), have been considered but are moot in light of the new grounds of rejection over reference Paek as presented in the rejection below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-16 and 18-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of prompt analysis without significantly more. Claims 1, 16 and 30 recite steps to: receive, from a user, a user prompt for a generative artificial intelligence model (LXM) (i.e., a data gathering step); determine an attention of the user to subject matter when or prior to receiving the user prompt (i.e., a data analysis step); generate an enhanced prompt based on the user prompt and the subject matter to which the user is paying attention when or prior to receiving the user prompt by weighting the portions of the user prompt based on user attention (i.e., a data analysis step); and submit the enhanced prompt to the LXM (i.e., a data transmission/post-solution step). These steps correspond to steps achievable by a human in analyzing gathered data and context information and providing an output, and as such fall within the mental processes category of abstract ideas. This judicial exception is not integrated into a practical application because the claims are directed to an abstract idea with additional generic computer elements (computing device, LXM, memory, processor, processor-readable medium), where the generically recited computer elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps to "generate an enhanced prompt based on the user prompt and the subject matter to which the user is paying attention when or prior to receiving the user prompt" and "submit the enhanced prompt to the LXM" correspond to the well-understood, routine, conventional computer functions of "collecting information, analyzing it, and displaying certain results of the collection and analysis" and "receiving or transmitting data over a network" as recognized by the court decisions listed in MPEP § 2106.05, and as presented by cited references Park, Prasad and Yang (see PTO-892 form). The dependent claims 3-15 and 18-29 also recite mental processes, do not add significantly more than the abstract idea, and are similarly rejected.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 3-16 and 18-30 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

In particular, claims 1, 16 and 30 recite "generate an enhanced prompt based on the user prompt and the subject matter to which the user is paying attention when or prior to receiving the user prompt by applying an adaptive important weighting to portions of the user prompt based on the attention of the user to the subject matter when or prior to receiving the user prompt". It is not clear how an enhanced prompt (that is based on the user prompt) is generated based on information, received prior to receiving the user prompt, on which the enhanced prompt depends. The claim is interpreted as claimed. The dependent claims are rejected based on their dependency.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

1. Claims 1, 5, 6, 9-14, 16-20, 23-28 and 30 are rejected under 35 U.S.C.
103 as being unpatentable over Park et al. US 2024/0420491 A1 ("Park") in view of Paek et al. US 2024/0185856 A1 ("Paek").

Per claim 1, Park discloses a computing device, comprising: a memory (para. [0185]); and at least one processor coupled to the memory and configured (para. [0185]; para. [0222]-[0223]) to: receive, from a user, a user prompt for a generative artificial intelligence model (LXM) (fig. 1; FIG. 1 is a graphical representation of conventional large language model (LLM) operation. As shown, a user 102 provides an input prompt to a client device 104 …, para. [0038]; para. [0218]); determine an attention of the user to subject matter when or prior to receiving the user prompt (para. [0093]-[0094]; Looking at the menu, the user may follow up with a question to their virtual assistant about the number of calories of a food item ("how many calories is a taco?")…. The reply may include a contextually relevant response, based on the location information of the user, with the calories of the food item on the restaurant's menu …, para. [0138]; para. [0207]; Consider, for example, a user that asked "What can I cook with this ingredient?" at the grocery store. They bought the ingredient and returned home. In the intervening time, their previous LLM session may have timed out. Here, the session management logic may reconstruct the previous conversation, so that when the user asks, "can I add this spice to the recipe?" the question is answered in the context of the same recipe that they were shown at the grocery store, para. [0292]; the session management logic may pre-emptively trigger an image capture of the user's gaze point and send LLM queries to e.g., prime the conversation state with information about the user's environment. These initial LLM queries may be performed before the user has said anything …, para. [0294]; the user looking/gazing at a menu item prior to asking a question, and the user paying attention to "this spice"/ingredient determined in the grocery store prior to the user question "can I add this spice to the recipe?" at home, implying the limitation); generate an enhanced prompt based on the user prompt and the subject matter to which the user is paying attention when or prior to receiving the user prompt (para. [0047]; para. [0060]; the smart glasses 402 may also gather contextual information about the user, their environment, and/or objects of interest, that may be useful to augment the user prompt. As but one such example, smart glasses 402 may use eye-tracking cameras and/or forward-facing cameras to obtain gaze information …, para. [0069]; para. [0137]; para. [0207]; para. [0219]; para. [0239]; para. [0292]-[0294]); and submit the enhanced prompt to the LXM (During online operation, the smart glasses 402 and smart phone 404 capture instantaneous user context …, para. [0060]; para. [0069]; para. [0075]-[0076]; For example, an LLM input specializer may be used to augment user context with additional input for an LLM. Functionally, an LLM input specializer augments the user's prompt in view of captured data ..., para. [0206]-[0207]; para. [0292]-[0294]).

Park does not explicitly disclose generating an enhanced prompt based on the user prompt and the subject matter to which the user is paying attention when or prior to receiving the user prompt by applying an adaptive important weighting to portions of the user prompt based on the attention of the user to the subject matter when or prior to receiving the user prompt. However, this feature is taught by Paek (fig. 9W; para. [0299]; para. [0317]-[0321]; For example, user gaze 902 may be weighted more heavily because user gaze 902 heavily indicates what word the user wishes to edit. Accordingly, target selector 830 may assign a high weight to the word "I" based on user gaze 902 …, para. [0322]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Paek with the device of Park in arriving at the missing features of Park, because such combination would have resulted in improving dictation services (Paek, para. [0002]; para. [0317]-[0322]).

Per claim 5, Park in view of Paek discloses the computing device of claim 1. Park discloses wherein the at least one processor is further configured to generate the enhanced prompt by generating a summary prompt that includes words assigned greater weight based on the attention of the user to the subject matter when or prior to receiving the user prompt (para. [0047]; para. [0060]; para. [0219]; para. [0239]; para. [0250]; para. [0291]).

Per claim 6, Park in view of Paek discloses the computing device of claim 1. Paek discloses wherein the at least one processor is further configured to apply the adaptive important weighting to portions of the user's prompt based on the user's attention to the subject matter when or prior to receipt of the user's prompt by adding an attention bias weight to words in the user prompt based on observations of the attention of the user paid to words or phrases in the subject matter prior to entry of the user prompt (para. [0092]; para. [0229]; para. [0256]; para. [0263]).

Per claim 9, Park in view of Paek discloses the computing device of claim 1. Park discloses wherein the at least one processor is further configured to generate the enhanced prompt based on the user prompt and the subject matter to which the user is paying attention by including in the enhanced prompt information regarding the subject matter to which the user paid attention when or prior to receiving the user prompt (During online operation, the smart glasses 402 and smart phone 404 capture instantaneous user context …, para. [0060]; para. [0068]-[0069]; para. [0138]; para. [0207]).

Per claim 10, Park in view of Paek discloses the computing device of claim 9. Park discloses wherein the at least one processor is further configured to include in the enhanced prompt information regarding the subject matter to which the user paid attention when or prior to receiving the user prompt by: generating text describing a portion of the subject matter to which the user paid attention when or prior to receiving the user prompt (para. [0060]; in the context of a large language model (LLM) the different modalities of information may first be converted to a common comparison domain (text) …, para. [0065]; para. [0068]-[0069]; para. [0138]; para. [0207]); and including at least a portion of the generated text in the enhanced prompt (para. [0065]; para. [0068]-[0069]).

Per claim 11, Park in view of Paek discloses the computing device of claim 10. Park discloses wherein the at least one processor is further configured to include in the enhanced prompt information regarding the subject matter to which the user paid attention when or prior to receiving the user prompt by: generating text summarizing the subject matter to which the user paid attention when or prior to receiving the user prompt (para. [0060]; in the context of a large language model (LLM) the different modalities of information may first be converted to a common comparison domain (text) …, para. [0065]; para. [0068]-[0069]; para. [0138]; para. [0207]); and including at least a portion of the generated text in the enhanced prompt (para. [0060]; in the context of a large language model (LLM) the different modalities of information may first be converted to a common comparison domain (text) …, para. [0065]; para. [0068]-[0069]; para. [0206]).
Per claim 12, Park in view of Paek discloses the computing device of claim 1. Park discloses wherein the at least one processor is further configured to determine the user attention to subject matter when or prior to receiving the user prompt by determining the user's attention to subject matter associated with the computing device when or prior to receiving the user's prompt (During online operation, the smart glasses 402 and smart phone 404 capture instantaneous user context …, para. [0060]; para. [0068]-[0069]; para. [0138]; para. [0207]).

Per claim 13, Park in view of Paek discloses the computing device of claim 1. Park discloses wherein the at least one processor is further configured to determine the attention of the user to subject matter when or prior to receiving the user prompt by determining the user's attention to subject matter associated with another nearby device when or prior to receiving the user prompt (During online operation, the smart glasses 402 and smart phone 404 capture instantaneous user context …, para. [0060]; para. [0068]-[0069]; para. [0138]; para. [0207]).

Per claim 14, Park in view of Paek discloses the computing device of claim 1. Park discloses wherein the LXM is a large language model (LLM) (para. [0053]).

Per claim 16, Park discloses a method performed by a computing device for generating a prompt for a generative artificial intelligence model (LXM), comprising: receiving a user prompt for the LXM (fig. 1; FIG. 1 is a graphical representation of conventional large language model (LLM) operation. As shown, a user 102 provides an input prompt to a client device 104 …, para. [0038]; para. [0218]); determining an attention of the user to subject matter when or prior to receiving the user prompt (para. [0093]-[0094]; Looking at the menu, the user may follow up with a question to their virtual assistant about the number of calories of a food item ("how many calories is a taco?")…. The reply may include a contextually relevant response, based on the location information of the user, with the calories of the food item on the restaurant's menu …, para. [0138]; para. [0207]; Consider, for example, a user that asked "What can I cook with this ingredient?" at the grocery store. They bought the ingredient and returned home. In the intervening time, their previous LLM session may have timed out. Here, the session management logic may reconstruct the previous conversation, so that when the user asks, "can I add this spice to the recipe?" the question is answered in the context of the same recipe that they were shown at the grocery store, para. [0292]; the session management logic may pre-emptively trigger an image capture of the user's gaze point and send LLM queries to e.g., prime the conversation state with information about the user's environment. These initial LLM queries may be performed before the user has said anything …, para. [0294]; the user looking/gazing at a menu item prior to asking a question, and the user paying attention to "this spice"/ingredient determined in the grocery store prior to the user question "can I add this spice to the recipe?" at home, implying the limitation); generating an enhanced prompt based on the user's prompt and the subject matter to which the user is paying attention when or prior to receiving the user prompt (para. [0060]; the smart glasses 402 may also gather contextual information about the user, their environment, and/or objects of interest, that may be useful to augment the user prompt. As but one such example, smart glasses 402 may use eye-tracking cameras and/or forward-facing cameras to obtain gaze information …, para. [0069]; para. [0137]; para. [0207]; para. [0292]-[0294]); and submitting the enhanced prompt to the LXM (During online operation, the smart glasses 402 and smart phone 404 capture instantaneous user context …, para. [0060]; para. [0069]; para. [0075]-[0076]; For example, an LLM input specializer may be used to augment user context with additional input for an LLM. Functionally, an LLM input specializer augments the user's prompt in view of captured data ..., para. [0206]-[0207]; para. [0292]-[0294]).

Park does not explicitly disclose generating an enhanced prompt based on the user prompt and the subject matter to which the user is paying attention when or prior to receiving the user prompt by applying an adaptive important weighting to portions of the user prompt based on the attention of the user to the subject matter when or prior to receiving the user prompt. However, this feature is taught by Paek (fig. 9W; para. [0299]; para. [0317]-[0321]; For example, user gaze 902 may be weighted more heavily because user gaze 902 heavily indicates what word the user wishes to edit. Accordingly, target selector 830 may assign a high weight to the word "I" based on user gaze 902 …, para. [0322]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Paek with the method of Park in arriving at the missing features of Park, because such combination would have resulted in improving dictation services (Paek, para. [0002]; para. [0317]-[0322]).

Per claim 18, Park in view of Paek discloses the method of claim 16. Park discloses wherein determining the attention of the user to the subject matter comprises one or more of tracking an eye gaze of the user on the subject matter, tracking a mouse cursor location on the subject matter, or tracking locations of touch input from the user on a touch sensitive display (para. [0223]; para. [0250]).
Per claim 19, Park in view of Paek discloses the method of claim 16. Park discloses wherein generating the enhanced prompt comprises generating a summary prompt that includes words assigned greater weight based on the attention of the user to the subject matter when or prior to receiving the user prompt (para. [0047]; para. [0060]; para. [0219]; para. [0239]; para. [0250]; para. [0291]).

Per claim 20, Park in view of Paek discloses the method of claim 16. Paek discloses wherein the adaptive important weighting is applied to portions of the user's prompt based on the user's attention to the subject matter when or prior to receipt of the user's prompt by adding an attention bias weight to words in the user prompt based on observations of the attention of the user paid to words or phrases in the subject matter prior to entry of the user prompt (para. [0092]; para. [0229]; para. [0256]; para. [0263]).

Per claim 23, Park in view of Paek discloses the method of claim 16. Park discloses wherein generating the enhanced prompt based on the user prompt and the subject matter to which the user is paying attention comprises including in the enhanced prompt information regarding the subject matter to which the user is paying attention when or prior to receiving the user prompt (During online operation, the smart glasses 402 and smart phone 404 capture instantaneous user context …, para. [0060]; para. [0068]-[0069]; para. [0292]-[0294]).

Per claim 24, Park in view of Paek discloses the method of claim 23. Park discloses wherein including in the enhanced prompt information regarding the subject matter to which the user is paying attention when or prior to receiving the user prompt comprises: generating text describing a portion of the subject matter to which the user is paying attention when or prior to receiving the user prompt (para. [0060]; in the context of a large language model (LLM) the different modalities of information may first be converted to a common comparison domain (text) …, para. [0065]; para. [0068]-[0069]; para. [0292]-[0294]); and including at least a portion of the generated text in the enhanced prompt (para. [0060]; in the context of a large language model (LLM) the different modalities of information may first be converted to a common comparison domain (text) …, para. [0065]; para. [0068]-[0069]; para. [0206]; para. [0292]-[0294]).

Per claim 25, Park in view of Paek discloses the method of claim 23. Park discloses wherein including in the enhanced prompt information regarding the subject matter to which the user is paying attention when or prior to receiving the user prompt comprises: generating text summarizing the subject matter to which the user is paying attention when or prior to receiving the user prompt (para. [0060]; in the context of a large language model (LLM) the different modalities of information may first be converted to a common comparison domain (text) …, para. [0065]; para. [0068]-[0069]; para. [0138]; para. [0207]; para. [0292]-[0294]); and including at least a portion of the generated text in the enhanced prompt (para. [0060]; in the context of a large language model (LLM) the different modalities of information may first be converted to a common comparison domain (text) …, para. [0065]; para. [0068]-[0069]; para. [0206]; para. [0292]-[0294]).

Per claim 26, Park in view of Paek discloses the method of claim 16. Park discloses wherein determining the attention of the user to the subject matter when or prior to receiving the user prompt comprises determining the user's attention to subject matter associated with the computing device when or prior to receiving the user prompt (During online operation, the smart glasses 402 and smart phone 404 capture instantaneous user context …, para. [0060]; para. [0068]-[0069]; para. [0138]; para. [0207]; para. [0292]; para. [0294]).

Per claim 27, Park in view of Paek discloses the method of claim 16. Park discloses wherein determining the user attention to the subject matter when or prior to receiving the user prompt comprises determining the user's attention to subject matter associated with another nearby device when or prior to receiving the user prompt (During online operation, the smart glasses 402 and smart phone 404 capture instantaneous user context …, para. [0060]; para. [0068]-[0069]; para. [0138]; para. [0207]; para. [0292]; para. [0294]).

Per claim 28, Park in view of Paek discloses the method of claim 16. Park discloses wherein: receiving the user prompt for the LXM comprises receiving the user prompt for a large language model (LLM) (fig. 1; FIG. 1 is a graphical representation of conventional large language model (LLM) operation. As shown, a user 102 provides an input prompt to a client device 104 …, para. [0038]; para. [0218]); and submitting the enhanced prompt to the LXM comprises submitting the enhanced prompt to the LLM (During online operation, the smart glasses 402 and smart phone 404 capture instantaneous user context …, para. [0060]; para. [0069]; para. [0075]-[0076]; For example, an LLM input specializer may be used to augment user context with additional input for an LLM. Functionally, an LLM input specializer augments the user's prompt in view of captured data ..., para. [0206]; para. [0292]; para. [0294]).

Per claim 30, Park discloses a computing device, comprising: means for receiving a user's prompt for a generative artificial intelligence model (LXM) (fig. 1; FIG. 1 is a graphical representation of conventional large language model (LLM) operation. As shown, a user 102 provides an input prompt to a client device 104 …, para. [0038]; para. [0185]; para. [0218]; para. [0222]-[0223]); means for determining an attention of the user to subject matter when or prior to receiving the user prompt (para. [0093]-[0094]; Looking at the menu, the user may follow up with a question to their virtual assistant about the number of calories of a food item ("how many calories is a taco?")…. The reply may include a contextually relevant response, based on the location information of the user, with the calories of the food item on the restaurant's menu …, para. [0138]; para. [0207]; Consider, for example, a user that asked "What can I cook with this ingredient?" at the grocery store. They bought the ingredient and returned home. In the intervening time, their previous LLM session may have timed out. Here, the session management logic may reconstruct the previous conversation, so that when the user asks, "can I add this spice to the recipe?" the question is answered in the context of the same recipe that they were shown at the grocery store, para. [0292]; the session management logic may pre-emptively trigger an image capture of the user's gaze point and send LLM queries to e.g., prime the conversation state with information about the user's environment. These initial LLM queries may be performed before the user has said anything …, para. [0294]; the user looking/gazing at a menu item prior to asking a question, and the user paying attention to "this spice"/ingredient determined in the grocery store prior to the user question "can I add this spice to the recipe?" at home, implying the limitation); means for generating an enhanced prompt based on the user's prompt and the subject matter to which the user is paying attention when or prior to receiving the user prompt (para. [0060]; the smart glasses 402 may also gather contextual information about the user, their environment, and/or objects of interest, that may be useful to augment the user prompt. As but one such example, smart glasses 402 may use eye-tracking cameras and/or forward-facing cameras to obtain gaze information …, para. [0069]; para. [0137]; para. [0207]; para. [0292]-[0294]); and means for submitting the enhanced prompt to the LXM (During online operation, the smart glasses 402 and smart phone 404 capture instantaneous user context …, para. [0060]; para. [0069]; para. [0075]-[0076]; For example, an LLM input specializer may be used to augment user context with additional input for an LLM. Functionally, an LLM input specializer augments the user's prompt in view of captured data ..., para. [0206]-[0207]; para. [0218]; para. [0222]-[0223]; para. [0292]-[0294]).

Park does not explicitly disclose generating an enhanced prompt based on the user prompt and the subject matter to which the user is paying attention when or prior to receiving the user prompt by applying an adaptive important weighting to portions of the user prompt based on the attention of the user to the subject matter when or prior to receiving the user prompt. However, this feature is taught by Paek (fig. 9W; para. [0299]; para. [0317]-[0321]; For example, user gaze 902 may be weighted more heavily because user gaze 902 heavily indicates what word the user wishes to edit. Accordingly, target selector 830 may assign a high weight to the word "I" based on user gaze 902 …, para. [0322]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Paek with the device of Park in arriving at the missing features of Park, because such combination would have resulted in improving dictation services (Paek, para. [0002]; para. [0317]-[0322]).

2. Claims 3, 4, 7, 15, 21 and 29 are rejected under 35 U.S.C.
103 as being unpatentable over Park in view of Paek as applied to claims 1 and 16 above, and further in view of Prasad et al US 2025/0004544 A1 (“Prasad”) Per claim 3, Park in view of Paek discloses the computing device of claim 1, Park discloses a user camera, wherein the at least one processor is further configured to determine the attention of the user to the subject matter by tracking an eye gaze of the user on the subject matter (para. [0223]; para. [0250]) Park does not explicitly disclose the use of a user facing camera However, this feature is taught by Prasad (fig. 1B, element 118) It would have been obvious to one of ordinary skill in the art before the effective filing of the invention to combine the teachings of Prasad with the device of Park in arriving at the missing features of Park, because such combination would have resulted in determining whether a front facing user is facing an audio capture device for the purposes of determining whether user speech/input is system-directed (Prasad, para. [0127]) Per claim 4, Park in view of Paek discloses the computing device of claim 1, Park discloses further comprising a touch sensitive display (para. [0223]), Park does not explicitly disclose wherein the at least one processor is further configured to determine the attention of the user to the subject matter by tracking locations of touch input from the user on the touch sensitive display However, this feature is taught by Prasad (para. [0054]) It would have been obvious to one of ordinary skill in the art before the effective filing of the invention to combine the teachings of Prasad with the device of Park in arriving at the missing features of Park, because such combination would have resulted in determining what content is more of interest to a user (Prasad, para. [0054]; para. 
[0064]).

Per claim 7, Park in view of Paek and Prasad discloses the computing device of claim 6. Park does not explicitly disclose wherein the at least one processor is further configured to increase the attention bias weight responsive to a duration the user focused on particular words and decrease the attention bias weight with time after the user focus shifts away from the particular words. However, this feature is suggested by Prasad, which discloses changing the attention coloring of a gazed area of text/words from green to yellow due to the time between gaze instances, to respectively indicate a detected maintained gaze or a gaze shift away from the text/words (A gaze event is one where the system 100 has determined that the user has actually looked at a particular location/region of the display 102 for a sufficient period of time to consider it an actual gaze …, para. [0034]; In response to the gaze event meeting the initial threshold the gaze manager 150/device manager 160 may control the display 102 to present a first visual indicator 322. For example, the first visual indicator 322 may correspond to a colored border surrounding the first GUI element 104. The border may change color or otherwise animate (e.g., through flashing, pulsing, or other animation) to indicate an active gaze. Such color of the border may also change as different gaze thresholds are met (for example starting at yellow and progressing to green or the like) …. The first visual indicator 322 and second visual indicator 324 may also animate or change to indicate a gaze away from the display 102 (such as a color change from green to yellow), para.
[0051]). It would have been obvious to one of ordinary skill in the art to try to implement wherein the at least one processor is further configured to increase the attention bias weight responsive to a duration the user focused on particular words and decrease the attention bias weight with time after the user focus shifts away from the particular words, because such implementation would have resulted in indicating an active gaze.

Per claim 15, Park in view of Paek discloses the computing device of claim 1. Park discloses a display coupled to the at least one processor (para. [0222]-[0223]). Paek discloses wherein the at least one processor is configured to: determine attention of the user to the subject matter when or prior to receiving the user prompt by determining the user’s attention to subject matter presented on the display when or prior to receiving the user’s prompt (para. [0317]-[0322]). Park does not explicitly disclose generate the enhanced prompt based on the user prompt and the subject matter to which the user is paying attention when or prior to receiving the user prompt by generating the enhanced prompt based on the user prompt and subject matter presented on the display to which the user is paying attention when or prior to receiving the user prompt. However, this feature is taught by Prasad: generate the enhanced prompt based on the user prompt and the subject matter to which the user is paying attention when or prior to receiving the user prompt by generating the enhanced prompt based on the user prompt and subject matter presented on the display to which the user is paying attention when or prior to receiving the user prompt (fig. 1B; para. [0056]; para.
[0074]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Prasad with the device of Park in arriving at the missing features of Park, because such combination would have resulted in determining what content is more of interest to a user (Prasad, para. [0054]; para. [0064]).

Per claim 21, Park in view of Paek discloses the method of claim 20. Park does not explicitly disclose increasing the attention bias weight responsive to a duration the user focused on particular words and decreasing the attention bias weight with time after the user focus shifts away from the particular words. However, this feature is suggested by Prasad, which discloses changing the attention coloring of a gazed area of text/words from green to yellow due to the time between gaze instances, to respectively indicate a detected maintained gaze or a gaze shift away from the text/words (A gaze event is one where the system 100 has determined that the user has actually looked at a particular location/region of the display 102 for a sufficient period of time to consider it an actual gaze …, para. [0034]; In response to the gaze event meeting the initial threshold the gaze manager 150/device manager 160 may control the display 102 to present a first visual indicator 322. For example, the first visual indicator 322 may correspond to a colored border surrounding the first GUI element 104. The border may change color or otherwise animate (e.g., through flashing, pulsing, or other animation) to indicate an active gaze. Such color of the border may also change as different gaze thresholds are met (for example starting at yellow and progressing to green or the like) …. The first visual indicator 322 and second visual indicator 324 may also animate or change to indicate a gaze away from the display 102 (such as a color change from green to yellow), para.
[0051]). It would have been obvious to one of ordinary skill in the art to try to implement increasing the attention bias weight responsive to a duration the user focused on particular words and decreasing the attention bias weight with time after the user focus shifts away from the particular words, because such implementation would have resulted in indicating an active gaze.

Per claim 29, Park in view of Paek discloses the method of claim 16. Paek discloses wherein: determining the user’s attention to the subject matter when or prior to receiving the user prompt comprises determining the attention of the user to subject matter presented on a display of the computing device when or prior to receiving the user prompt (para. [0317]-[0322]). Park does not explicitly disclose generating the enhanced prompt based on the user prompt and the subject matter to which the user is paying attention when or prior to receiving the user prompt comprises generating the enhanced prompt based on the user prompt and subject matter presented on the display to which the user is paying attention when or prior to receiving the user’s prompt. However, this feature is taught by Prasad: generating the enhanced prompt based on the user prompt and the subject matter to which the user is paying attention at the time or prior to receipt of the user prompt comprises generating the enhanced prompt based on the user prompt and subject matter presented on the display to which the user is paying attention at the time or prior to receipt of the user prompt (fig. 1B; para. [0056]; para. [0074]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Prasad with the method of Park in arriving at the missing features of Park, because such combination would have resulted in determining what content is more of interest to a user (Prasad, para. [0054]; para. [0064]).
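As a reading aid for the limitation at issue in claims 7 and 21, the recited behavior (an attention bias weight that grows with how long the user dwelt on particular words and decays once focus shifts away) can be sketched as a minimal model. The function name, linear-gain form, and half-life decay below are illustrative assumptions, not language or parameters from Park, Paek, or Prasad:

```python
def attention_bias(dwell_s: float, since_shift_s: float,
                   gain: float = 0.5, half_life_s: float = 2.0,
                   max_weight: float = 1.0) -> float:
    """Toy attention-bias weight for a span of words.

    The weight increases with the duration the user focused on the
    words (capped at max_weight), then decays exponentially with the
    time elapsed since the user's focus shifted away.
    """
    base = min(max_weight, gain * dwell_s)        # longer focus -> higher weight
    decay = 0.5 ** (since_shift_s / half_life_s)  # halves every half_life_s
    return base * decay
```

Under this sketch, a prompt generator would weight displayed words by `attention_bias(...)` at the moment the user prompt is received, so recently gazed-at subject matter dominates the enhanced prompt.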
Allowable Subject Matter

Claims 8 and 22 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the PTO-892 form.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUJIMI A ADESANYA, whose telephone number is (571) 270-3307. The examiner can normally be reached Monday-Friday, 8:30 am-5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil, can be reached at 571-272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OLUJIMI A ADESANYA/
Primary Examiner, Art Unit 2658

Prosecution Timeline

Oct 23, 2023
Application Filed
Jun 26, 2025
Non-Final Rejection — §101, §103, §112
Sep 02, 2025
Interview Requested
Sep 26, 2025
Interview Requested
Sep 29, 2025
Response Filed
Oct 02, 2025
Examiner Interview Summary
Oct 02, 2025
Applicant Interview (Telephonic)
Oct 12, 2025
Final Rejection — §101, §103, §112
Dec 16, 2025
Response after Non-Final Action
Jan 12, 2026
Request for Continued Examination
Jan 14, 2026
Response after Non-Final Action
Jan 24, 2026
Non-Final Rejection — §101, §103, §112
Mar 24, 2026
Interview Requested
Mar 30, 2026
Examiner Interview Summary
Mar 30, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591739
METHOD AND SYSTEM FOR DIACRITIZING ARABIC TEXT
2y 5m to grant Granted Mar 31, 2026
Patent 12585686
EVENT DETECTION AND CLASSIFICATION METHOD, APPARATUS, AND DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12585481
METHOD AND ELECTRONIC DEVICE FOR PERFORMING TRANSLATION
2y 5m to grant Granted Mar 24, 2026
Patent 12578779
Multiple Stage Network Microphone Device with Reduced Power Consumption and Processing Load
2y 5m to grant Granted Mar 17, 2026
Patent 12579181
Synchronization of Sensor Network with Organization Ontology Hierarchy
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
66%
Grant Probability
91%
With Interview (+25.5%)
3y 6m
Median Time to Grant
High
PTA Risk
Based on 655 resolved cases by this examiner. Grant probability derived from career allow rate.
