Prosecution Insights
Last updated: April 19, 2026
Application No. 18/657,557

PROVIDING CONTEXTUAL EDUCATIONAL CONTENT FOR A CONTENT RECEIVER USING ARTIFICIAL INTELLIGENCE

Non-Final OA: §101, §102, §103, §112

Filed: May 07, 2024
Examiner: BIANCAMANO, ALYSSA N
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: DISH NETWORK L.L.C.
OA Round: 1 (Non-Final)

Grant Probability: 56% (Moderate)
Projected OA Rounds: 1-2
Projected Time to Grant: 3y 3m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 56% (90 granted / 161 resolved; -14.1% vs TC avg)
Interview Lift: strong, +38.2% (with vs. without interview, among resolved cases)
Typical Timeline: 3y 3m avg prosecution
Currently Pending: 46
Career History: 207 total applications, across all art units

Statute-Specific Performance

§101: 15.9% (-24.1% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§103: 33.3% (-6.7% vs TC avg)
§112: 33.1% (-6.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 161 resolved cases.
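The deltas above are internally consistent if read as percentage points (an assumption; the report does not say so explicitly): every statute implies the same Tech Center baseline. A quick sketch of that arithmetic, including the career allow rate:

```python
# Reconstruct the implied Tech Center averages from the examiner's
# statute-specific rates and their "vs TC avg" deltas, assuming the
# deltas are percentage points (an assumption, not stated in the report).
examiner_rate = {"101": 15.9, "102": 14.1, "103": 33.3, "112": 33.1}
delta_vs_tc = {"101": -24.1, "102": -25.9, "103": -6.7, "112": -6.9}

# examiner = TC avg + delta, so TC avg = examiner - delta
tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_avg)  # every statute implies the same 40.0% baseline

# Career allow rate: 90 granted out of 161 resolved cases
allow_rate = 90 / 161
print(f"{allow_rate:.1%}")  # 55.9%, reported as 56%
```

That all four statutes back out to an identical 40.0% baseline suggests the tool uses a single Tech Center estimate across statutes.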

Office Action

Rejections under §101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 2-7, 11, 14-15, and 17-19 are objected to because of the following informalities: “wherein determining” recited in claim 2, ln. 1, claim 3, ln. 1, and claim 4, ln. 1 should likely read “wherein the determining” to avoid claim ambiguity; “wherein training” recited in claim 5, ln. 1 should likely read “wherein the training” to avoid claim ambiguity; “the displayed instructional content” recited in claim 6, ln. 3 should likely read “the displayed educational content” for consistency purposes and to avoid claim ambiguity; “wherein obtaining” recited in claim 7, ln. 1 should likely read “wherein the obtaining” to avoid claim ambiguity; “a content receiver” recited in claim 11, ln. 4 and claim 17, ln. 4 should likely read “[[a]]the content receiver”; “in response to educational content displayed” recited in claim 14, ln. 2 should likely read “in response to the educational content displayed”; “wherein causing” recited in claim 15, ln. 1 should likely read “wherein the causing”; “determine to provide educational content” recited in claim 18, ln. 1-2 should likely read “determine to provide the educational content”; and “the user” recited in claim 19, ln. 3 should likely read “[[the]]a user”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the Specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 1 recites in part “obtaining contextual information associated with content currently displayed to a user using a content receiver”. However, the Specification fails to disclose how the contextual information is obtained (see, e.g., Specification, [0081], noting that the contextual information may include an indication of content being displayed by the content receiver, but failing to describe how said information is obtained). Accordingly, the claim is rejected for a lack of written description, as the Specification fails to provide details regarding how this function is accomplished. Independent claims 16 and 19 are rejected for similar reasoning (see claim 16, ln. 5 & claim 19, ln. 3-4). Claim 16 is further similarly rejected due to the limitation “obtain one or more renderings of a control device for the content receiver”, wherein the Specification likewise fails to disclose how said information is obtained.
All dependent claims are rejected by virtue of their dependencies on the independent claims. Furthermore, dependent claims 4, 7, 11, 12, 13, 14, 17, 18, and 20 are further rejected for similar reasoning, as the Specification fails to describe how action data, contextual information that includes a notification currently displayed, user support interactions, call center interaction embeddings, educational content/information, and user input requesting a state change of the content receiver are obtained, as required by the claims.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 8, 11, and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 8 recites the limitation "the response" in ln. 3. There is insufficient antecedent basis for this limitation in the claim. A suggested amendment is as follows: “an output of the educational artificial intelligence model”.

Claim 11 recites in part “obtaining a plurality of user support interactions, wherein each user support interaction in the plurality of user support interactions includes contextual information regarding a content receiver; comparing the contextual information to one or more user support interactions of the plurality of user support interactions”. It is indefinite as to whether “the contextual information” recited in ln. 5 of the claim is referring to the contextual information of the plurality of user support interactions previously recited in ln. 4 of the claim, the contextual information as recited in claim 1, from which claim 11 depends, or both. Claim 17 is rejected for similar reasoning (see claim 17, ln. 5).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract idea(s) without significantly more.

Regarding claim 1, analyzed as representative claim:

[Step 1] Claim 1 recites in part “A method”, which falls within the “process” statutory category of invention.

[Step 2A – Prong 1] The claim recites a series of steps which can be practically performed by one or more humans through mental process (i.e., observation, evaluation, judgment, and/or opinion) (see MPEP 2106.04(a)(2)(III)) and/or certain methods of organizing human activity (i.e., managing personal behavior or relationships or interactions between people – including social activities, teaching, and following rules or instructions) (See MPEP 2106.04(a)(2)(II)).
Claim 1 recites:

A method comprising: obtaining contextual information associated with content currently displayed to a user using a content receiver (mental process: observation); determining to provide, to the user, educational content regarding the content receiver (mental process: observation/judgment/evaluation); creating, based on the contextual information, a prompt for an educational artificial intelligence model (mental process: evaluation); providing the prompt to the educational artificial intelligence model (mental process: observation/evaluation; human activity: interaction between individuals, e.g., teaching); causing educational content to be displayed to the user based on output from the educational artificial intelligence model (mental process: evaluation; human activity: interaction between individuals, e.g., teaching); and training the educational artificial intelligence model based on user input received in response to displaying the educational content (mental process: evaluation; human activity: interactions between individuals, e.g., teaching).

The limitations encompass, under their broadest reasonable interpretation, mental processes and/or certain methods of organizing human activity, as indicated above, but for the recitation of generic machinery (educational artificial intelligence model). For example, a human could visually observe contextual information associated with content displayed to another user, mentally determine that the user needs assistance/could benefit from educational content or guidance (i.e., visually observe and mentally determine that the user is struggling or has a question), mentally create and provide a prompt (e.g., “Do you need some demonstration on how to proceed?”), output educational content accordingly (e.g., a paper handout with drawings indicating how to do certain things), and receive user feedback regarding the educational content in order to further tailor the guidance.
Thus, the claim recites an abstract idea(s).

[Step 2A – Prong 2] The claim fails to recite additional limitations to integrate the abstract idea(s) into a practical application. The use of an educational artificial intelligence (“AI”) model is recited at a high level of generality and amounts to no more than the use of generic machinery (i.e., off-the-shelf AI model) as a tool to perform the abstract idea(s). There is no indication that the combination of elements improves the functionality of a computer or other technology (See MPEP 2106.05(a)), recites a “particular machine” to apply or use the abstract idea (See MPEP 2106.05(b)), recites a particular transformation of an article to a different thing or state (See MPEP 2106.05(c)), or recites any other meaningful limitation (See MPEP 2106.05(e)). Accordingly, the claim is directed to the abstract idea(s).

[Step 2B] As discussed above with respect to integration of the abstract idea(s) into a practical application, the claim does not further include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitation of an educational AI model amounts to no more than use of generic machinery. The claim utilizes off-the-shelf AI technology as a tool for implementing the abstract idea(s). Taking the claim elements separately, the functions performed by the AI model are devoid of technical/technological implementation details. Further, the limitations, when taken in combination, add nothing that is not already present when looking at the elements taken individually. There is no indication that the limitations improve computer capabilities or improve an existing technology (see, e.g., Recentive Analytics, Inc. v. Fox Corp., 134 F.4th 1205 (Fed. Cir. 2025) (steps incidental to automating an abstract idea were not sufficient to confer eligibility, and further, noting that the application of existing technology to a novel database does not create patent eligibility)). The artificial intelligence technology as described is conventional and/or well-understood, as demonstrated by the Specification ([0023], “In some embodiments, educational AI module 214 includes one or more generative artificial intelligence models such as generative pre-trained transformer (GPT)-3.5, GPT-4, Claude, Perplexity AI, Google Bard, Sora, etc., or a combination thereof.”). Therefore, the claim is not patent eligible.

Independent claim 16 recites a system comprising one or more memories configured to collectively store instructions, and one or more processors configured to collectively execute the stored instructions to perform the “obtaining”, “determining”, “creating”, “providing”, and “causing” steps of claim 1 analyzed above, while independent claim 19 recites one or more non-transitory computer-readable media that collectively store instructions executable by a processor to perform the actions of claim 1 analyzed above. These additional limitations are recited at a high level of generality such that they do not amount to a particular machine or technical improvement thereof, nor do they represent an improvement in technology. Rather, the generic manner in which the additional elements are claimed amounts to mere instructions to implement the abstract idea(s) in a computer environment and/or utilize generic computing components as tools to perform the abstract ideas (see Specification, [0095], “Processor 822 includes one or more processors, processing units, programmable logic, circuitry, or other computing components”; [0096], “Memory 804 may include one or more various types of non-volatile or volatile storage technologies.
Examples of memory 804 include, but are not limited to, flash memory, hard disk drives, optical drives, solid-state drives, various types of random-access memory (“RAM”), various types of read-only memory (“ROM”), other computer-readable storage media (also referred to as processor-readable storage media), or other memory technologies, or any combination thereof.”).

Moreover, claim 16 further recites “obtain one or more renderings of a control device for the content receiver”, wherein the prompt is further created based on the one or more renderings of the control device. This limitation, under its broadest reasonable interpretation, encompasses a mental process (observation, evaluation) and/or insignificant extra-solution activity (i.e., data gathering), which does not integrate the abstract idea(s) into a practical application or provide significantly more (i.e., an inventive concept). Thereby, claims 16 and 19 are also not patent eligible.

Claims 2-15, 17-18, and 20 are dependent on claims 1, 16, and 19, respectively, and therefore recite the same abstract idea(s) noted above. While dependent claims 2-15, 17-18, and 20 may have a narrower scope than the independent claims, the claims fail to recite additional limitations that would integrate the abstract idea(s) into a practical application or provide significantly more (i.e., an inventive concept). Therefore, claims 2-15, 17-18, and 20 are also not patent eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-7, 9-14, and 16-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hatambeiki et al.
(U.S. Pub. 2022/0394348 A1) (hereinafter “Hatambeiki”).

Regarding claim 1, Hatambeiki discloses a method ([0002-0003]; [0015], method to provide technical support and/or recommendation services to a customer) comprising: obtaining contextual information associated with content currently displayed to a user using a content receiver (Fig. 8; [0005]; [0016]; [0037-0038]; [0044-0045]; [0047]; [0050]; [0081]; [0086], monitoring consumer interaction with content displayed using a cable set top box and/or appliance data, including responses or lack thereof, to obtain contextual information); determining to provide, to the user, educational content regarding the content receiver ([0016]; [0037]; [0043-0044]; [0047]; [0083]; [0085], wherein monitoring consumer activity (e.g., on a controlling device or via spoken keywords) may indicate a probable issue for which guidance (educational content) is determined to be needed, wherein the guidance may be regarding an appliance such as a cable set top box combined with a digital video recorder); creating, based on the contextual information, a prompt for an educational artificial intelligence model ([0023]; [0037-0039]; [0043]; [0047-0049]; [0083-0086]; [0100], wherein the gathered contextual information is used to prompt a virtual agent); providing the prompt to the educational artificial intelligence model ([0023]; [0037-0039]; [0047]; [0083-0086]; [0089-0090]; [0100], e.g., wherein prompts received from the consumer (i.e., voice prompts) are provided to the virtual agent to allow for the issuance of commands in response); causing educational content to be displayed to the user based on output from the educational artificial intelligence model ([0023]; [0044-0046]; [0083], e.g., provide necessary information to the consumer to present instructions including pictures and other graphical elements that help the consumer); and training the educational artificial intelligence model based on user input received in response to displaying the educational content ([0037]; [0106], wherein a feedback loop is used to train the system).

Regarding claim 2, Hatambeiki further discloses wherein determining to provide educational content regarding the content receiver comprises: receiving, from the user, a request for educational content regarding the content receiver ([0016]; [0037]; [0043-0044]; [0047]; [0083-0085]; [0090], wherein the monitoring consumer activity (e.g., on a controlling device or via spoken keywords) includes receiving a support request (i.e., a specific question or utterance) from the consumer regarding an appliance such as a cable set top box combined with a digital video recorder).

Regarding claim 3, Hatambeiki further discloses wherein determining to provide educational content regarding the content receiver comprises: detecting an action of the user that indicates that the user requires educational content to control the content receiver ([0016]; [0037]; [0043-0044]; [0047]; [0083]; [0085], wherein the monitoring consumer activity (e.g., on a controlling device or via spoken keywords) may indicate a probable issue for which guidance (educational content) is determined to be needed, wherein the guidance may be regarding an appliance such as a cable set top box combined with a digital video recorder).
Regarding claim 4, Hatambeiki further discloses wherein determining to provide educational content regarding the content receiver comprises: obtaining action data that indicates an action of the user ([0016]; [0037]; [0043-0044]; [0047]; [0083]; [0085], monitoring consumer activity (e.g., on a controlling device or via spoken keywords)); creating, using the action data, a prompt to query a detection artificial intelligence model to determine whether the action of the user indicates that the user requires educational content to control the content receiver ([0016]; [0037]; [0043-0044]; [0046-0047]; [0083]; [0085], wherein a consumer statement (e.g., “why won’t the movie play?”) is received by a virtual voice assistant indicative of a problem); providing the prompt to the detection artificial intelligence model ([0016]; [0037]; [0043-0044]; [0046-0047]; [0083]; [0085], wherein the consumer statement (e.g., “why won’t the movie play?”) is received by a virtual voice assistant indicative of a problem); and determining to provide the educational content based on output from the detection artificial intelligence model ([0016]; [0037]; [0043-0044]; [0046-0047]; [0083]; [0085], wherein based on the consumer activity, it may be determined that an error occurs and the consumer may be prompted to perform correction instructions and/or recommendations correspondingly generated as needed). 
Regarding claim 5, Hatambeiki further discloses wherein training the educational artificial intelligence model based on user input received in response to displaying the educational content comprises: creating, based on the user input, a feedback prompt to query a feedback artificial intelligence model to determine a level of satisfaction of the user with the educational content ([0089]; [0096-0100]; [0106], wherein customer sentiment (level of satisfaction) can be determined based on user input (e.g., words used, tone, speed, etc.)); providing the feedback prompt to the feedback artificial intelligence model ([0089]; [0096-0100]; [0106], wherein a virtual agent detects the user input); and training the educational artificial intelligence model based on output of the feedback artificial intelligence model ([0089]; [0095-0100]; [0106], wherein the feedback loop can be used to train the system).

Regarding claim 6, Hatambeiki further discloses creating, based on the user input, a feedback prompt for a feedback artificial intelligence model to determine a level of satisfaction of the user with the displayed instructional content ([0089]; [0096-0100]; [0106], wherein customer sentiment (level of satisfaction) can be determined based on user input (e.g., words used, tone, speed, etc.)); providing the feedback prompt to the feedback artificial intelligence model ([0089]; [0096-0100]; [0106], wherein a virtual agent detects the user input); and creating a subsequent prompt for the educational artificial intelligence model based on output of the feedback artificial intelligence model ([0089]; [0095-0100]; [0106], e.g., prompting the virtual agent to hand over the conversation).

Regarding claim 7, Hatambeiki further discloses wherein obtaining contextual information associated with content currently displayed to the user comprises: obtaining contextual information that includes a notification currently displayed to the user (Fig. 8; [0005]; [0037-0039]; [0042]; [0044-0045]; [0047]; [0050]; [0081], wherein notifications (e.g., special illumination pattern, audible notification, or the like) may be provided to the consumer, such as to notify and prompt the consumer of software and/or apps for installation).

Regarding claim 9, Hatambeiki further discloses wherein creating the prompt for the educational artificial intelligence model comprises: including, in the prompt, one or more instructions that direct the educational artificial intelligence model to perform actions comprising: attempting to obtain existing educational content regarding the content receiver based on the contextual information; and causing the educational content to be displayed based on the existing educational content ([0016]; [0031]; [0037]; [0043]; [0049]; [0083], wherein the output of the virtual agent guiding the consumer in the performance of corrective measures includes consideration of capability information for the appliance(s) (e.g., cable set top box), such as an owner’s manual, engineering specification, diagrams, etc., and wherein the output to the consumer may include pictures or other graphical elements that help the consumer).
Regarding claim 10, Hatambeiki further discloses wherein creating the prompt for the educational artificial intelligence model comprises: including, in the prompt, one or more instructions that direct the educational artificial intelligence model to perform actions comprising: attempting to obtain existing educational content regarding the content receiver based on the contextual information; in response to failing to locate existing educational content based on the contextual information, generating educational content response to the contextual information; and causing the educational content to be displayed based on the generated educational content ([0016]; [0023]; [0031-0032]; [0037]; [0044-0046]; [0051]; [0083], wherein the output of the virtual agent guiding the consumer in the performance of corrective measures may be generated without the need to reference a particular database (capability information), and wherein the instructions/corrective measures are generated and provided to the consumer (e.g., in the form of pictures or other graphical elements)).

Regarding claim 11, Hatambeiki further discloses – as best understood in light of the rejection under 35 U.S.C. 112(b) presented above – wherein creating the prompt for the educational artificial intelligence model comprises: obtaining a plurality of user support interactions, wherein each user support interaction in the plurality of user support interactions includes contextual information regarding a content receiver ([0016]; [0037]; [0043-0044]; [0047]; [0083]; [0085], wherein consumer interactions (e.g., on a controlling device or via spoken keywords) are obtained and which are indicative of an issue (e.g., regarding a cable set top box) for which guidance is needed); comparing the contextual information to one or more user support interactions of the plurality of user support interactions; selecting, based on the comparison, a user support interaction of the one or more user support interactions; and creating the prompt based on the selected user support interaction ([0016]; [0023]; [0037-0039]; [0043-0044]; [0047-0049]; [0083-0086]; [0100], wherein the consumer activity (e.g., on a controlling device or via spoken keywords) is analyzed with respect to appliance responses or lack thereof and a prompt is created accordingly).
Regarding claim 12, Hatambeiki further discloses wherein creating the prompt for the educational artificial intelligence model comprises: obtaining a plurality of call center interaction embeddings based on a plurality of call center interactions ([0005]; [0089]; [0092], wherein the contextual information gathered by the virtual agent can be a transcript and/or a recording (in whole or in part) of the conversation with the consumer (i.e., call center interactions), which can be augmented with additional information as needed (e.g., scale an issue or utterance into a specific category)); creating an embedding of the contextual information ([0005]; [0089]; [0092], wherein the contextual information gathered by the virtual agent can be a transcript and/or a recording (in whole or in part) of the conversation with the consumer, which can be augmented with additional information as needed (e.g., scale an issue or utterance into a specific category)); comparing the embedding of the contextual information to each call center interaction embedding in the plurality of call center interaction embeddings; selecting, based on the comparison, one or more call center interaction embeddings of the plurality of call center interaction embeddings to include in the prompt; and creating the prompt based on the one or more call center interaction embeddings ([0089]; [0097], wherein a sentiment of the customer can be deduced and used to prompt the virtual agent (e.g., provide a different dialog flow and/or action, or lead to hand off to a human)).

Regarding claim 13, Hatambeiki further discloses wherein creating the prompt for the educational artificial intelligence model comprises: obtaining educational content regarding the content receiver; creating an embedding of the educational content; and creating the prompt using the embedding of the educational content ([0031-0032]; [0041-0042]; [0083]).
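For orientation, claim 12 recites a standard retrieval pattern: embed the contextual information, compare it against stored call-center interaction embeddings, and build the prompt from the closest match. A minimal sketch of that pattern follows; it is illustrative only (not from the application or the cited art), and the vectors and interaction labels are toy stand-ins for a real embedding model's output:

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical call-center interaction embeddings (toy 3-d vectors)
interactions = {
    "remote not pairing":  [0.9, 0.1, 0.0],
    "no signal on HDMI 1": [0.1, 0.9, 0.2],
    "DVR storage full":    [0.0, 0.2, 0.9],
}

# Embedding of the current contextual information
context_embedding = [0.85, 0.15, 0.05]

# Select the most similar prior interaction and fold it into the prompt
best = max(interactions, key=lambda k: cosine(context_embedding, interactions[k]))
prompt = f"User context resembles prior support case {best!r}. Draft guidance."
print(best)  # -> remote not pairing
```

A production system would use a learned text-embedding model and an approximate nearest-neighbor index rather than an exhaustive scan, but the claim language maps onto exactly these steps.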
Regarding claim 14, Hatambeiki further discloses after changing a state of the content receiver in response to educational content displayed to the user, obtaining user input requesting a state of the content receiver be reverted to a backup state; and causing the state of the content receiver to be reverted to the backup state ([0015]; [0089]; [0096], wherein the conversation may be handed off to a human agent (a backup state) in response to consumer input).

Regarding claim 16, Hatambeiki discloses a system ([0002-0003]; [0015], system for providing technical support and/or recommendation services to a customer) comprising: one or more memories configured to collectively store instructions (Figs. 5-6; [0026-0027]; [0029]); one or more processors configured to collectively execute the stored instructions (Figs. 5-6; [0026-0027]; [0029]) to: determine to provide, to a user, an educational animation regarding a content receiver ([0016]; [0037]; [0043-0044]; [0047]; [0083]; [0085], wherein monitoring consumer activity (e.g., on a controlling device or via spoken keywords) may indicate a probable issue for which guidance (educational content) is determined to be needed, wherein the guidance may be regarding an appliance such as a cable set top box combined with a digital video recorder and may be in the form of pictures or other graphical elements (educational animation)); obtain contextual information associated with a current state of the content receiver (Fig. 8; [0005]; [0016]; [0037]; [0044-0045]; [0047]; [0050]; [0081], monitoring consumer interaction with content displayed and/or appliance (e.g., cable set top box) responses or lack thereof to obtain contextual information); obtain one or more renderings of a control device for the content receiver (Fig. 8; [0016]; [0088]); create, based on the contextual information and the one or more renderings of the control device, a prompt for an educational artificial intelligence model that requests the educational animation (Fig. 8; [0016]; [0023]; [0037-0039]; [0043]; [0047-0049]; [0083-0086]; [0088]; [0100], wherein the gathered contextual information is used to prompt a virtual agent); provide the prompt to the educational artificial intelligence model ([0023]; [0037-0039]; [0047]; [0083-0086]; [0089-0090]; [0100], e.g., wherein prompts received from the consumer (e.g., on a controlling device or via spoken keywords) are provided to the virtual agent to allow for the issuance of commands in response); and cause the educational animation to be displayed to the user based on output from the educational artificial intelligence model ([0023]; [0044-0046]; [0083], e.g., provide necessary information to the consumer to present instructions including pictures and other graphical elements that help the consumer).

Regarding claim 17, Hatambeiki further discloses – as best understood in light of the rejection under 35 U.S.C. 112(b) presented above – wherein the one or more processors create the prompt for the educational artificial intelligence model by being further configured to: obtain a plurality of user support interactions, wherein each user support interaction in the plurality of user support interactions includes contextual information regarding a content receiver ([0016]; [0037]; [0043-0044]; [0047]; [0083]; [0085], wherein consumer interactions (e.g., on a controlling device or via spoken keywords) are obtained and which are indicative of an issue (e.g., regarding a cable set top box) for which guidance is needed); compare the contextual information to each user support interaction of the plurality of user support interactions; select, based on the comparison, one or more user support interactions of the plurality of user support interactions; and create the prompt based on the one or more user support interactions ([0016]; [0023]; [0037-0039]; [0043-0044]; [0047-0049]; [0083-0086]; [0100], wherein the consumer activity (e.g., on a controlling device or via spoken keywords) is analyzed with respect to appliance responses or lack thereof and a prompt is created accordingly).
Regarding claim 18, Hatambeiki further discloses wherein the one or more processors determine to provide educational content regarding the content receiver by being further configured to: obtain action data that indicates an action of the user ([0016]; [0037]; [0043-0044]; [0047]; [0083]; [0085], monitoring consumer activity (e.g., on a controlling device or via spoken keywords)); create, using the action data, a prompt to query a detection artificial intelligence model to determine whether the action of the user indicates that the user requires educational content to control the content receiver ([0016]; [0037]; [0043-0044]; [0046-0047]; [0083]; [0085], wherein a consumer statement (e.g., “why won’t the movie play?”) is received by a virtual voice assistant indicative of a problem); provide the prompt to the detection artificial intelligence model ([0016]; [0037]; [0043-0044]; [0046-0047]; [0083]; [0085], wherein the consumer statement (e.g., “why won’t the movie play?”) is received by a virtual voice assistant indicative of a problem); and determine to provide the educational content based on output from the detection artificial intelligence model ([0016]; [0037]; [0043-0044]; [0046-0047]; [0083]; [0085], wherein based on the consumer activity, it may be determined that an error occurs and the consumer may be prompted to perform correction instructions and/or recommendations correspondingly generated as needed).

Regarding claim 19, claim 19 is one or more non-transitory computer-readable media that collectively store instructions executable by a processor to perform the actions recited in claim 1, and therefore is rejected for the same reasoning as claim 1.
Regarding claim 20, Hatambeiki further discloses obtaining educational information regarding the content receiver; comparing the contextual information to the educational information; selecting, based on the comparison, a portion of the educational information to include in the prompt; and creating the prompt based on the portion of the educational information ([0031-0032]; [0041-0042]; [0083]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Hatambeiki in view of Pedersen et al. (U.S. Pub. 2024/0386313 A1) (hereinafter “Pedersen”).

Regarding claim 8, Hatambeiki may not further disclose wherein creating the prompt for the educational artificial intelligence model comprises: including, in the prompt, one or more instructions to format the response in a specified style. However, Pedersen, directed to an artificial intelligence chatbot ([0001-0002]), teaches this limitation ([0109], wherein prompts specify a desired format or structure of the response of the model).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include in the prompt of Hatambeiki one or more instructions to format the response in a specified style, as taught by Pedersen, in order to formulate an effective and/or tailored prompt that generates desired outputs (Pedersen, [0109]).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Hatambeiki in view of “How to pair Roku Streaming Stick 4K Remote”, 6 pages, uploaded on Mar. 28, 2023 by user “@iwanttoreadwithbrett” (hereinafter “Roku”).

Regarding claim 15, Hatambeiki may not further explicitly disclose wherein causing educational content to be displayed to the user based on output from the educational artificial intelligence model comprises: causing the content receiver to enter a learning mode wherein the educational content is displayed alongside the content currently displayed to the user; and in response to detecting user input that indicates that the user understands the educational content, causing the content receiver to exit the learning mode. However, Roku teaches entering a learning mode, and in response to detecting user input that indicates that the user understands the educational content, causing the content receiver to exit the learning mode (pp. 1-6, wherein educational content teaching a user how to pair a remote is presented on the display screen as part of setup content, and upon detecting user input pairing the remote, the learning mode is exited).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to enter a learning mode upon displaying the educational content to the user which may be exited upon detecting user input indicative that the user understands the educational content, as taught by Roku, in the invention of Hatambeiki, which similarly presents educational content in the form of pictures or other graphical elements, in order to provide aid to the user for specific actions.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALYSSA N BRANDLEY whose telephone number is (571)272-4280. The examiner can normally be reached M-F: 8:30am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dmitry Suhol, can be reached at (571)272-4430. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALYSSA N BRANDLEY/Examiner, Art Unit 3715

Prosecution Timeline

May 07, 2024
Application Filed
Feb 05, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597363
STEERING WHEEL CONNECTOR FOR AUTOMOTIVE SIMULATOR
2y 5m to grant · Granted Apr 07, 2026
Patent 12592308
SYSTEM AND METHOD FOR AN ARTIFICIAL INTELLIGENCE ENGINE THAT USES A MULTI-DISCIPLINARY DATA SOURCE TO DETERMINE COMORBIDITY INFORMATION PERTAINING TO USERS AND TO GENERATE EXERCISE PLANS FOR DESIRED USER GOALS
2y 5m to grant · Granted Mar 31, 2026
Patent 12564762
PHYSICAL ACTIVITY MONITORING AND MOTIVATING WITH AN ELECTRONIC DEVICE
2y 5m to grant · Granted Mar 03, 2026
Patent 12567341
ORIENTATION ASSISTANCE SYSTEM
2y 5m to grant · Granted Mar 03, 2026
Patent 12532953
COLOR CHART AND METHOD FOR THE MANUFACTURE OF SUCH A COLOR CHART
2y 5m to grant · Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
56%
Grant Probability
94%
With Interview (+38.2%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 161 resolved cases by this examiner. Grant probability derived from career allow rate.
