Prosecution Insights
Last updated: April 19, 2026
Application No. 18/763,493

SYSTEM AND METHOD FOR STORYTELLING EXPERIENCE

Non-Final OA: §101, §103, §112

Filed: Jul 03, 2024
Examiner: FOSTER JR., MICHAEL ALAN
Art Unit: 2654
Tech Center: 2600 — Communications
Assignee: Universal City Studios LLC
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg) - grants only 0% of cases
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 3 total applications across all art units (3 currently pending)

Statute-Specific Performance

§101: 28.6% (-11.4% vs TC avg)
§103: 57.1% (+17.1% vs TC avg)
§102: 7.1% (-32.9% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center average shown for comparison (estimate). Based on career data from 0 resolved cases.
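If each delta is read as the examiner's rate minus the Tech Center average estimate, every row implies a TC average near 40% (for example, 28.6% + 11.4% = 40.0% for §101 and 57.1% - 17.1% = 40.0% for §103).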

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office action is sent in response to Applicant's communication received on 7/3/2024 for application number 18/763,493. The office hereby acknowledges receipt of the following placed of record in the file: Specification, Abstract, Oath/Declaration, and claims.

Status of the Claims

Claims 1-20 are presented for examination.

Information Disclosure Statement

The information disclosure statements submitted on 7/3/2024 and 12/18/2025 were received before the mailing date of the first office action. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 6 recites the limitation "the determined sentiment of the speech". There is insufficient antecedent basis for this limitation in the claim: the sentiment of the speech is not recited in claim 1, from which claim 6 depends. This issue could be addressed by amending claim 6 to depend from claim 5.

Claim Objections

Claim 14 is objected to because of the following informality: "in an amusement park during;" is unclear and reads as an unfinished sentence. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, as explained below.

Claim 1 recites a system for creating a storytelling experience comprising: a microphone, a display, a speaker, and a computing device comprising processing circuitry and memory, accessible by the processing circuitry and storing instructions that, when executed by the processing circuitry, cause the processing circuitry to perform operations comprising:
(a) receiving, from the microphone, the data representative of the speech;
(b) performing natural language understanding (NLU) on the received data to determine a semantic meaning of the speech;
(c) providing an input to a large language model (LLM), wherein the input comprises the received data, the determined semantic meaning of the speech, or any combination thereof;
(d) receiving, from the LLM, an output comprising a visualization and corresponding audio generated by the LLM based on the input;
(e) causing the visualization generated by the LLM to be displayed via the display; and
(f) causing the audio generated by the LLM to be played via the speaker.
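For orientation, operations (a)-(f) can be pictured as a simple software pipeline. The sketch below is illustrative only: the function name and the duck-typed collaborators (microphone, nlu_engine, llm, display, speaker) are assumptions of this summary, not the applicant's implementation or any API recited in the application.

    # Editor's illustrative sketch (not from the application): claim 1's
    # operations (a)-(f) as a plain Python pipeline. Collaborators are
    # hypothetical interfaces; no particular NLU engine, LLM, or hardware
    # API is implied.
    def run_storytelling_cycle(microphone, nlu_engine, llm, display, speaker):
        # (a) receive, from the microphone, data representative of the speech
        speech_data = microphone.capture()
        # (b) perform NLU on the received data to determine a semantic meaning
        semantic_meaning = nlu_engine.parse(speech_data)
        # (c) provide an input to the LLM (received data and/or semantic meaning)
        llm_input = {"speech": speech_data, "semantic_meaning": semantic_meaning}
        # (d) receive an output comprising a visualization and corresponding audio
        visualization, audio = llm.generate(llm_input)
        # (e) cause the visualization to be displayed via the display
        display.show(visualization)
        # (f) cause the audio to be played via the speaker
        speaker.play(audio)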
Step (a) is a data gathering activity. This step can be done by a human, as one person can speak and the other person can receive the data representative of the speech. Step (b) can be performed by a human, as a person can understand the meaning of the received speech. Step (c) can be performed by a human, as a human can provide the semantic meaning of the speech to the LLM (an additional element). Step (d) can be performed by a human (instead of the LLM), as a receiving person can respond with audio and a visualization (drawing/text). The LLM, display, and speakers all comprise additional elements.

Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. The claim recites at least a system. Thus, the claim is a machine, which is one of the statutory categories of invention. (Step 1: YES).

Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim "recites" a judicial exception when the judicial exception is "set forth" or "described" in the claim. As discussed above, the broadest reasonable interpretation of steps (a)-(d) recites a mental process. Specifically, step (a) can be done by a human, as one person can speak and another person can receive the data representative of speech. Step (b) can be performed by a human, as a person can understand the semantic meaning of the received speech. Step (c) can be performed by a human, as one can provide to another person the semantic meaning of speech. Step (d) can be performed by a human, as a person can receive the data and respond with audio and drawing/text. Hence the claim encompasses mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. (Step 2A, Prong One: YES).

Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is "directed to" the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d). The claim recites a microphone configured to collect data representative of speech; a display configured to display visualizations; a speaker configured to play audio; a computing device comprising processing circuitry and memory; and an LLM. However, microphones, displays, and speakers are all generic computer components. The LLM as described ([0025]: "…a computational model capable of natural language understanding") is a generic computer component as recited in the specification. The system, processing circuitry, and memory are all recited at a high level of generality. The user interface and the action of evoking a response are used to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. (Step 2A, Prong Two: NO), and the claim is directed to the judicial exception. (Step 2A: YES).
Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. As explained with respect to Step 2A, Prong Two, the LLM, display, and speakers comprise additional elements that do not contribute to the patentability of the claim as a whole. The additional element of the "LLM" in limitations (d)-(e) is at best mere instructions to "apply" the abstract ideas, which cannot provide an inventive concept. See MPEP 2106.05(f). At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g). As known in the art, these elements are well understood, routine, and conventional. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept. The claim is not patent eligible.

Claims 2-7 and 17-20 recite mental processes, since humans can perform these steps mentally. For example, a human can "receive the imaging data". A human can "perform gesture analysis" and identify one or more gestures.

Regarding claim 8, the analysis applicable to claim 1 applies. In addition, claim 8 requires performing a sentiment analysis on the received data to determine a sentiment of the speech. A human can determine the sentiment from the speech; hence this is a mental step under Step 2A, Prong One.

Claims 9-15 are recited with a high level of generality. The steps recited merely require the non-transitory medium to perform the given task, where the task performed is an abstract idea.

Regarding claim 16, analysis analogous to claim 8 is applicable.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1-5, 8-14, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Baeuml et al. (US 20230343323) in view of Huang et al. (U.S. 20200126584).

Regarding claim 1, Baeuml teaches a system for creating a storytelling experience ([0005]: "…enabling an automated assistant…").
It also teaches a microphone configured to collect data representative of speech ([0027]: "the client device 110 may be equipped with one or more …"); a display configured to display visualizations ([0028]: "the client device 110 may be equipped with a display…"); a computing device comprising processing circuitry and memory, accessible by the processing circuitry and storing instructions that, when executed by the processing circuitry, cause the processing circuitry to perform operations ([0034]: "… one or more memories for storage of data…"); receiving the data representative of speech from a microphone ([0036]: "streams of audio data that capture spoken…"); performing natural language understanding (NLU) on the received data to determine a semantic meaning of the speech ([0001]: "a natural language understanding (NLU)…"); providing an input to a large language model (LLM), wherein the input comprises the received data ([0074]: "…based on processing the stream of audio data…"); receiving, from the model, an output comprising a visualization and corresponding audio generated by the model based on the input ([Abstract]: "…the given assistant output…"); causing the visualization generated by the model to be displayed via the display ([0152]: "…in controlling a display…"); and causing the audio generated by the model to be played via the speaker ([0152]: "…via one or more speakers…").

Baeuml does not teach wherein the input comprises the determined semantic meaning of the speech, or any combination thereof. However, Huang teaches wherein the input comprises the determined semantic meaning of the speech, or any combination thereof ([Claim 1]: "...being based on recognition of: semantic information…"). Therefore, it would have been obvious to a person having ordinary skill in the art (PHOSITA) at the time of the claimed invention to modify Baeuml to incorporate the teachings of Huang before the effective filing date to utilize the semantic meaning in the input to determine the sentiment of the text so that the LLM can generate better results ([0054]: "…extracts sentiment information…"), and to determine sentiment data in order to achieve more accurate data for customer satisfaction and user feedback ([0050]: "…sentiment information can describe…").

Regarding claim 2, Baeuml, as above in claim 1, teaches an imaging system comprising an imaging sensor configured to collect imaging data; receiving the imaging data from the imaging sensor; and performing gesture analysis on the imaging data; wherein the input comprises the imaging data, the one or more identified gestures, or any combination thereof ([0027]: "…and/or certain movements (e.g., gestures)....").

Regarding claim 3, using the provided definition of keepsake (a video, an image, a sticker, a calendar, etc.), Baeuml teaches receiving, from the LLM, an image or video, which under the broadest reasonable interpretation could include a design for a keepsake ([0152]: "…causing the stream of visual cues…").

Regarding claim 4, Baeuml, as above in claim 3, teaches wherein the keepsake comprises a video, an image, a sticker, a calendar, a book, a shirt, a hat, a pin, or any combination thereof ([0152]: "…causing the stream of visual cues…").

Regarding claim 5, Baeuml, as above in claim 1, does not teach performing a sentiment analysis on the received data to determine a sentiment of the speech.
However, Huang teaches performing a sentiment analysis on the received data to determine a sentiment of the speech ([0003]: "…one or more sentiments…"). It would be obvious to a PHOSITA to modify Baeuml to incorporate the teachings of Huang before the effective filing date to determine sentiment data in order to achieve more accurate data for customer satisfaction and user feedback ([0050]: "…sentiment information can describe…").

Regarding claim 8, Baeuml teaches a non-transitory computer readable medium storing instructions that, when executed by the processing circuitry, cause the processing circuitry to perform operations ([00164]: "…non-transitory computer readable…"); receiving data representative of speech ([0001]: "…spoken utterance."); performing natural language understanding (NLU) on the received data to determine a semantic meaning of the speech ([0001]: "…natural language understanding (NLU)…"); and receiving, from the LLM, a visualization generated by the LLM based on the input, and causing the visualization generated by the LLM to be displayed ([Abstract]: "…utilized in controlling a display…"). Baeuml does not teach performing a sentiment analysis on the received data to determine a sentiment of the speech, or wherein the input comprises the determined semantic meaning of the speech, the determined sentiment of the speech, or any combination thereof. However, Huang teaches performing a sentiment analysis on the received data to determine a sentiment of the speech ([0003]: "…sentiment information that conveys…"), and wherein the input comprises the determined semantic meaning of the speech, or any combination thereof ([Claim 1]: "...being based on recognition of: semantic information…"). It would be obvious to a PHOSITA to modify Baeuml to incorporate the teachings of Huang before the effective filing date to utilize the semantic meaning in the input to determine the sentiment of the text so that the LLM can generate better results ([0054]: "…extracts sentiment information…"), and to determine sentiment data in order to achieve more accurate data for customer satisfaction and user feedback ([0050]: "…sentiment information can describe…").

Regarding claim 9, Baeuml, as above in claim 8, teaches the medium wherein the NLU comprises applying one or more NLU algorithms or rule sets to the received data to determine the semantic meaning of the speech based on one or more words identified in the speech ([0036]: "…e.g., a long short-term memory (LSTM)…").

Regarding claim 10, Huang, as above in claim 8, teaches the medium wherein the sentiment analysis comprises applying one or more sentiment-identifying algorithms or rule sets to the received data to determine the sentiment of the speech based on one or more words identified in the speech, tone of voice, intonation, or any combination thereof ([0081]: "…the sentiment classification engine 206 corresponds…").

Regarding claim 11, Baeuml teaches the computer readable medium wherein the received data representative of the speech comprises audio data ([0027]: "…spoken utterances…").

Regarding claim 12, Baeuml does not teach receiving data representative of speech in the form of a transcript. However, Huang teaches receiving data representative of speech in the form of a transcript (Huang [0049]: "A speech recognizer engine 204 converts the stream of audio features received from the preprocessing engine 202 to text information.").
It would have been obvious to a PHOSITA to modify Baeuml before the effective filing date in such a way as to allow transcript inputs in order to gain the benefit of generating images based on a text transcript, as taught by Huang ([0052]: "…most closely match the input text…").

Regarding claim 13, Baeuml discloses a computer readable medium performing operations comprising receiving context data representative of one or more actions performed by a user. In the absence of a more specific definition of "action", the broadest reasonable interpretation includes actions such as walking away and coming back, changing locations, etc. (Baeuml [0013]: "The automated assistant may differentiate between multiple dialog…").

Regarding claim 14, Baeuml teaches a computer readable medium wherein the operations comprise: receiving media, wherein the media comprises one or more images of a guest captured by an imaging system during a visit to an amusement park ([0045]: "…corresponding streams of vision data…").

Regarding claim 16, Baeuml teaches a method for creating a storytelling experience ([0005]: "…enabling an automated assistant…"); receiving an image of a guest having an experience ([0045]: "…vision data capture a human…"); performing natural language understanding on the received data to determine a semantic meaning of the speech ([0001]: "a natural language understanding (NLU)…"); and receiving, from the LLM, a visualization generated by the LLM based on the input, wherein the visualization includes the image, and causing the visualization generated by the LLM to be displayed via a display ([Abstract]: "…utilized in controlling a display…"). Baeuml does not teach performing a sentiment analysis on the received data to determine a sentiment of the speech, or wherein the input comprises the determined semantic meaning of the speech, the determined sentiment of the speech, or any combination thereof. However, Huang teaches performing a sentiment analysis on the received data to determine a sentiment of the speech ([0003]: "…sentiment information that conveys…"), and wherein the input comprises the determined semantic meaning of the speech, or any combination thereof ([Claim 1]: "...being based on recognition of: semantic information…"). It would be obvious to a PHOSITA to modify Baeuml to incorporate the teachings of Huang before the effective filing date to utilize the semantic meaning in the input to determine the sentiment of the text so that the LLM can generate better results ([0054]: "…extracts sentiment information…"), and to determine sentiment data in order to achieve more accurate data for customer satisfaction and user feedback ([0050]: "…sentiment information can describe…").

Regarding claim 17, arguments analogous to claim 13 are applicable.

Regarding claim 18, Baeuml teaches receiving, from an imaging system, imaging data from the guest delivering the speech describing the experience; and performing a gesture analysis on the received imaging data to identify one or more gestures made by the guest; wherein the input comprises the imaging data, the one or more identified gestures, or any combination thereof ([0027]: "…streams of vision data that capture images, videos, and/or certain movements (e.g., gestures) ....").

Regarding claim 19, Baeuml teaches receiving, from the LLM, a design for a keepsake to be provided to the guest, wherein the keepsake comprises a video, an image, a sticker, a calendar, a book, a shirt, a hat, a pin, or any combination thereof ([0152]: "…causing the stream of visual cues…").
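The combined teaching this rejection relies on (Huang's sentiment and semantic analysis of transcribed speech feeding Baeuml's assistant) can be pictured with a small sketch. This is an editor's illustration only: the function names are hypothetical, the keyword-based sentiment scoring is a toy stand-in for the trained classifiers the references describe, and nothing here is taken from either reference.

    # Editor's illustrative sketch: combine a speech transcript, its semantic
    # meaning, and a sentiment label into a single LLM input. The word lists
    # and function names are assumptions, not the cited references' methods.
    POSITIVE_WORDS = {"love", "great", "fun", "amazing", "happy"}
    NEGATIVE_WORDS = {"hate", "boring", "bad", "scary", "sad"}

    def analyze_sentiment(transcript: str) -> str:
        """Toy sentiment label for a transcript (stand-in for a trained classifier)."""
        words = transcript.lower().split()
        score = sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    def build_llm_input(transcript: str, semantic_meaning: str) -> str:
        """Combine transcript, semantic meaning, and sentiment into one LLM input."""
        sentiment = analyze_sentiment(transcript)
        return (
            f"Guest said: {transcript}\n"
            f"Semantic meaning: {semantic_meaning}\n"
            f"Sentiment: {sentiment}\n"
            "Generate a short story visualization description and matching narration."
        )

    print(build_llm_input("I love the dragon ride, it was amazing",
                          "guest praises the dragon ride"))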
Claims 6, 7, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Baeuml et al. (U.S. 20230343323) in view of Huang et al. (U.S. 20200126584), as applied to claims 1-5, 8-14, and 16-19 above, and further in view of Garvey et al. (U.S. 20240320591).

Regarding claim 6, Baeuml does not teach generating guest satisfaction data based on the received data, the determined semantic meaning of the speech, the determined sentiment of the speech, the LLM, or any combination thereof. However, Huang teaches the concept of determining the semantic meaning of the speech and the sentiment of the speech. It would be obvious to a PHOSITA to modify Baeuml to incorporate the teachings of Huang before the effective filing date to utilize the semantic meaning in the input to determine the sentiment of the text so that the LLM can generate better results ([0054]: "…extracts sentiment information…"). Baeuml modified by Huang does not teach generating guest satisfaction data based on the received data. However, Garvey teaches a system wherein guest satisfaction data can be generated based on qualitative and quantitative data, which under the broadest reasonable interpretation can include speech or the semantic meaning of speech (Garvey [0004]: "…qualitative and quantitative data..."). Garvey teaches that this can help to isolate or identify underperforming areas of a product and allow for more accurate design changes to improve the user experience. It would be obvious to a PHOSITA to modify Baeuml in such a way as to incorporate these teachings to help isolate or identify underperforming areas of a product and allow for more accurate design changes to improve the user experience ([0004]: "…qualitative and quantitative data...").

Regarding claim 7, as in claim 6, Baeuml does not teach a system wherein the guest satisfaction data identifies an aspect of a visit to an amusement park and an indication of a guest's satisfaction with the aspect of the visit to the amusement park indicated by the speech. However, Garvey teaches a system wherein guest satisfaction data identifies an aspect of a visit as indicated by speech ([0015]: "In some embodiments, the systems..."). The satisfaction data can be useful for measuring and refining underperforming aspects of a product ([0004]: "…product or service that are underperforming…"). It would be obvious to a PHOSITA to modify Baeuml before the effective filing date in such a way as to incorporate these teachings to help isolate or identify underperforming areas of a product and allow for more accurate design changes to improve the user experience ([0004]: "…qualitative and quantitative data...").

Regarding claim 20, arguments analogous to claim 7 are applicable.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Baeuml et al. (U.S. 20230343323) in view of Huang et al. (U.S. 20200126584), as applied to claims 1-5, 8-14, and 16-19 above, and further in view of Agarwal et al. (U.S. 20250310326).

Regarding claim 15, the combination of Baeuml and Huang does not teach the medium wherein the operations comprise: receiving media, wherein the media comprises one or more images of a guest, wherein the one or more images were captured by a mobile device belonging to the guest; wherein the input comprises the media. However, Agarwal teaches receiving media, wherein the media comprises one or more images of a guest ([Abstract]: "…user to upload an image…").
Therefore, it would have been obvious to a person having ordinary skill in the art at the time of the claimed invention to incorporate the teachings of Agarwal into the combination of Baeuml and Huang because it would achieve the added benefit of being able to verify the user's identity and ensure more stringent security ([Abstract]: "…the user's identity…").

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL ALAN FOSTER JR., whose telephone number is (571) 272-8874. The examiner can normally be reached T - F, 7:00am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hai Phan, can be reached at (571) 272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL A FOSTER JR/
Examiner, Art Unit 2654

/HAI PHAN/
Supervisory Patent Examiner, Art Unit 2654

Prosecution Timeline

Jul 03, 2024: Application Filed
Feb 26, 2026: Non-Final Rejection (§101, §103, §112)
Mar 20, 2026: Interview Requested
Mar 27, 2026: Applicant Interview (Telephonic)
Apr 14, 2026: Examiner Interview Summary


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
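The projection methodology is not disclosed on this page. As a minimal sketch, assuming the simplest reading of the note above: with zero resolved cases the estimate falls back to a Tech Center baseline, and the 62% baseline used below is only implied by the "-62.0% vs TC avg" delta against a 0% career allow rate (assuming the delta means examiner rate minus TC average). The function name and the small-sample shrinkage are this sketch's assumptions, not the tool's actual model.

    # Editor's illustrative sketch of a fallback grant-probability estimate.
    def estimate_grant_probability(allowed: int, resolved: int, tc_average: float,
                                   prior_weight: int = 5) -> float:
        """Rough grant-probability estimate in [0, 1] (illustrative only)."""
        if resolved == 0:
            # No career history yet: fall back to the Tech Center average.
            return tc_average
        # Otherwise shrink the career allow rate toward the TC average for small samples.
        return (allowed + prior_weight * tc_average) / (resolved + prior_weight)

    # 0 granted / 0 resolved; TC average of ~62% implied by the page's delta.
    print(estimate_grant_probability(0, 0, 0.62))  # 0.62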
