Prosecution Insights
Last updated: April 19, 2026
Application No. 17/735,920

Accessibility Enhanced Content Creation

Non-Final OA (§102, §103)
Filed: May 03, 2022
Examiner: DOSHER, JULIE GRACE
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Disney Enterprises Inc.
OA Round: 3 (Non-Final)
Grant Probability: 25% (At Risk)
OA Rounds: 3-4
To Grant: 3y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 25% (3 granted / 12 resolved; -45.0% vs TC avg). Grants only 25% of cases.
Interview Lift: +100.0% (strong lift; resolved cases with an interview vs. without)
Avg Prosecution (typical timeline): 3y 7m; 27 applications currently pending
Total Applications (career history): 39, across all art units
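
These tiles are simple ratios over the examiner's 12 resolved cases. A minimal sketch of how such figures are typically computed is below; the dashboard's exact formulas are not disclosed, so the helper names and the definition of interview lift (with-interview allowance rate relative to without-interview) are assumptions, and the with/without inputs in the example are illustrative rather than values taken from this page.

```python
# Hypothetical helpers for the examiner-intelligence tiles above.
# Assumption: "interview lift" compares allowance rates for resolved cases
# with vs. without an examiner interview; the dashboard's actual formula is not stated.

def allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved cases that ended in a grant."""
    return granted / resolved if resolved else 0.0

def interview_lift(rate_with_interview: float, rate_without_interview: float) -> float:
    """Relative change in allowance rate when an interview was held."""
    return (rate_with_interview - rate_without_interview) / rate_without_interview

career = allow_rate(granted=3, resolved=12)   # 0.25 -> the 25% career allow rate shown above
lift = interview_lift(0.50, 0.25)             # 1.00 -> a +100% lift (illustrative inputs only)
print(f"career allow rate: {career:.0%}, interview lift: {lift:+.0%}")
```

With 3 grants out of 12 resolved cases, the career figure works out to 25%, matching the tile above.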

Statute-Specific Performance

§101: 17.1% (-22.9% vs TC avg)
§103: 40.7% (+0.7% vs TC avg)
§102: 19.6% (-20.4% vs TC avg)
§112: 18.6% (-21.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 12 resolved cases.
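
Since each delta is just the examiner's rate minus the Tech Center average, the implied TC baseline can be recovered from the figures above. A quick sanity check (variable names are illustrative; the percentages are the ones shown):

```python
# Recover the implied Tech Center average from each statute's rate and its "vs TC avg" delta.
examiner_rate = {"§101": 17.1, "§103": 40.7, "§102": 19.6, "§112": 18.6}   # percent
delta_vs_tc = {"§101": -22.9, "§103": 0.7, "§102": -20.4, "§112": -21.4}   # percentage points

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}%")
# Every row implies a Tech Center average estimate of roughly 40.0%.
```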

Office Action

§102, §103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Response to Arguments The previous objections to the claims 1 and 11 are withdrawn in light of the amendments to the claims, filed 12/05/2025. Applicant’s arguments with respect to the rejection of the claims under 35 U.S.C. § 102 have been fully considered and are persuasive (Remarks, filed 12/05/2025, pp. 7-9). Therefore, the rejections of claims 1-3, 6, 9, 11-13, 16, and 19 have been withdrawn. However, as necessitated by amendments, a new ground(s) of rejection of claims 1 and 11 under 35 U.S.C. § 102 has been raised, as presented in detail below. Similarly necessitated by amendments, a new ground(s) of rejection of claims 1-3, 6, 9, 11-13, 16, and 19 under 35 U.S.C. § 103 have also been raised, as presented in detail below. Concerning the rejection of the claims under 35 U.S.C. § 103, Applicant argues that dependent claims 4-5, 7-8, 10, 14-15, 17-18, and 20 should be allowable because the prior art of record fails to teach all elements of independent claims 1 and 11, which these claims depend upon and further limit (Remarks, filed 12/05/2025, pp. 9-11). Examiner respectfully directs Applicant’s attention to the new rejections of independent claims 1 and 11, as necessitated by amendments. See rejections of claims 1-20 under 35 U.S.C. § 103 as presented in detail below. Claim Rejections - 35 USC § 102 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claims 1 and 11 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 2013/0307786 (hereinafter “Heubel”). Regarding Claim 1 and related method Claim 11, Heubel discloses a processing hardware (par. 0019: “the processor 210”); and a system memory storing a software code (par. 0019: “the memory 220 stores program code or data, or both”); the processing hardware configured to execute the software code (par. 0019: “the processor 210 executes program code stored in memory 220”) to: receive a primary content (abstract: “electronic content is received by an electronic device”); execute at least one of a visual analysis or an audio analysis of the primary content (par. 0071: “tablet computer 320 may analyze the portion of the received content, such as… audio, and/or video, to determine whether an occurrence of one or more of the predefined events has occurred”); determine, based on executing the at least one of the visual analysis or the audio analysis, one or more word strings each corresponding to one of a plurality of haptic effects (par. 
0015: “a predefined event can be the word "rain" being displayed on a display and the predefined event may be associated with a predefined haptic effect from the plurality of predefined haptic effects that is configured to provide a "raining" haptic effect;” par. 0052: “For example, a predefined event may comprise the word "crashing" being displayed on a display and the plurality of predefined haptic effects may contain a predefined haptic effect configured to provide a "crashing" sensation;” par. 0049: “a predefined event includes a particular phrase, variations of a particular word or phrase… in an image or pre-recorded video or live video stream”); generate an accessibility track including, the plurality of haptic effects configured to be actuated when the primary content reaches a location corresponding to each of the plurality of haptic effects (fig. 5: steps 540 and 550; par; 0078: “a signal is generated before an occurrence of a predefined event is determined to have occurred. For example, as a user scrolls through the electronic content, the processor 210 may generate a signal as a predefined event becomes closer to occurring”); synchronize the accessibility track to the primary content (par. 0080: “the processor 210 generates a signal configured to cause haptic output device 240 to output the predefined haptic effect associated the predefined event;” par. 0016: “electronic device 100 then identifies an occurrence of the predefined event by analyzing at least a portion of the electronic content. In response to identifying the occurrence of the predefined event, the predefined haptic effect or haptic effects associated with the predefined event can be generated;” Examiner notes the haptic effects occur at the same time as the associated event occurs within the primary content); and supplement the primary content with the accessibility track to provide an accessibility enhanced content (par. 0016: “In response to identifying the occurrence of the predefined event, the predefined haptic effect or haptic effects associated with the predefined event can be generated;” par. 0002: “physical tactile sensations which have traditionally been provided by mechanical buttons are no longer present in many such devices. Instead, haptic effects may be output by handheld devices to alert the user to various events”). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1-2, 6, 9, 11-12, 16, and 19 are rejected under 35 U.S.C. 
103 as being unpatentable over Bruner and further in view of Heubel. Regarding Claim 1 and related method Claim 11, Bruner discloses a processing hardware (fig. 1A; par. 0004: “through at least one processor configured to translate the received information;” par. 0075: “processors by which information may be processed”); and a system memory storing a software code (fig. 1B; par. 0005: “processing instructions stored in the processor readable memory;” par. 0075: “memory storage into which data may be saved”); the processing hardware configured to execute the software code (figs. 1A-1B; par. 0004: “through at least one processor configured to translate the received information;” par. 0005: “processing instructions stored in the processor readable memory”) to: receive a primary content (par. 0037: “translation platform may be configured to receive and process a variety of information in one of multiple different format (e.g. audio, video, images, textual elements as well as other forms of text-based speech such as subtitles… the translation platform can capture or receive information in formats such as, but not limited to, audio content, video content, image content, photographs, and/or other such information”); execute at least one of a visual analysis or an audio analysis of the primary content (par. 0037: “At optional step 203, received information is processed to identify information and/or speech elements to be translated. For example, voice recognition software can be applied to recorded audio to convert audio information into textual information. As another example, an image can be scanned and/or optical character recognition (OCR) can be performed to identify words, numbers, phrases and/or other such relevant information. Similarly, one or more frames and/or portions of a video can be scanned, OCR performed, image recognition preformed, tracking movement over multiple frames, or otherwise processed to identify information”); generate an accessibility track (par. 0037: “translation platform may be configured to receive and process a variety of information in one of multiple different format… and provide a translation to one or more translated formats (e.g., sign language, text, audio, video, and other such formats or combinations of such formats);” par. 0020: “The translation of information from one format to another format can be very beneficial to many users and/or audiences... benefit from alternative forms of accessibility translations;” par. 0025: “some embodiments are configured to receive information and translate that information into a corresponding sign-language translation, which may be manifested, for example, in one or more video clips, displayed through animation and/or presented by one or more avatars, and/or other such manifestations or combinations of such manifestations); synchronize the accessibility track to the primary content (figs. 3a-3d; par. 0021: “the Translation Platform allows for rapid conversion of Closed Captioning text feeds to a sign language video stream that may be displayed, for example, in a picture-in-picture box on a television or other display medium alongside the corresponding broadcast, thereby providing hearing impaired individuals with an alternative and possibly more effective accessibility option;” par. 
0045: “In one implementation, the current word, phrase, letter, number, punctuation mark, and/or the like for which [a sign language] video clip is being played in the picture-in-picture window is also highlighted in the Closed Captioning display”); and supplement the primary content with the accessibility track to provide an accessibility enhanced content (pars. 0004-0005; figs. 3a-3d: the accessibility track is played alongside primary content). Bruner discloses a visual and/or audio analysis which is used to generate an accessibility track including a sign language performance (see above) but does not disclose an accessibility track including haptic effects. However, Heubel discloses determine, based on executing the at least one of the visual analysis or the audio analysis (par. 0071: “tablet computer 320 may analyze the portion of the received content, such as… audio, and/or video, to determine whether an occurrence of one or more of the predefined events has occurred”), one or more word strings each corresponding to one of a plurality of haptic effects (par. 0015: “a predefined event can be the word "rain" being displayed on a display and the predefined event may be associated with a predefined haptic effect from the plurality of predefined haptic effects that is configured to provide a "raining" haptic effect;” par. 0052: “For example, a predefined event may comprise the word "crashing" being displayed on a display and the plurality of predefined haptic effects may contain a predefined haptic effect configured to provide a "crashing" sensation;” par. 0049: “a predefined event includes a particular phrase, variations of a particular word or phrase… in an image or pre-recorded video or live video stream”); and generate an accessibility track including, the plurality of haptic effects configured to be actuated when the primary content reaches a location corresponding to each of the plurality of haptic effects (fig. 5: steps 540 and 550; par; 0078: “a signal is generated before an occurrence of a predefined event is determined to have occurred. For example, as a user scrolls through the electronic content, the processor 210 may generate a signal as a predefined event becomes closer to occurring;” par. 0016: “electronic device 100 then identifies an occurrence of the predefined event by analyzing at least a portion of the electronic content. In response to identifying the occurrence of the predefined event, the predefined haptic effect or haptic effects associated with the predefined event can be generated). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the generation of an accessibility track as disclosed by Bruner with the generation of haptic effects as disclosed by Heubel in order to introduce tactile sensations to the content (Heubel, par. 0002), which is widely understood to be an accessible feature, and/or to increase user immersion by applying relevant tactile effects to particular detected words (Heubel, pars. 0015-0017). Regarding Claim 2 and related method Claim 12, Bruner further discloses synchronize the accessibility track to the primary content contemporaneously with generating the accessibility track (par. 0140: “Further, some embodiments provide a conversion, translation or the like of substantially any type of communication and/or information into sign language or to another form of information data or communication [the accessibility track]. 
Additionally, the translation or conversion can be implemented in real time. Still further, in some implementations, information to be translated is capture in real time and the translation provided in substantially real time”). Regarding Claim 6 and related method Claim 16, Bruner modified by Heubel further discloses the primary content comprises audio content (Bruner, par. 0037: “The translation platform may be configured to receive and process a variety of information in one of multiple different format (e.g. audio, video, images, textual elements as well as other forms of text-based speech…)”), and wherein the plurality of haptic effects (see claim 1 above for explanation of combination/integration of haptic effects of Heubel with the disclosed accessibility track of Bruner) are generated based on the audio content using natural language processing (NLP) (Bruner, fig. 1A; par. 0029: “The translation platform 101 may contain a number of functional modules and data libraries. A platform controller 105 may orchestrate the reception and distribution of data to and from the translation platform, as well as between various other translation platform modules. A grammar engine 110 may communicate with a rules database 115, containing a collection of language grammar rules, to process contextual aspects of audio, textual and/or written speech inputs. A similarity engine 120 may communicate with a thesaurus database 125, containing lists of speech elements with associated synonyms, to find synonyms for input speech elements. A translation engine 130 may communicate with a sign language (SL) video library 135, containing video clips of words/phrases 140 and letters/numbers 145, and/or the like in sign language format, to produce sign language video clips corresponding to speech element inputs”). The combination of the generation of an accessibility track as disclosed by Bruner with the generation of haptic effects of Heubel described above for Claim 1 would have included that the haptic effects are ultimately generated from the audio content using the same processing method described by Bruner for other accessibility/multimedia tracks. Regarding Claim 9 and related method Claim 19, Bruner further discloses the accessibility track further comprises one or more video tokens, wherein each of the one or more video tokens includes a pre-produced video (figs. 3a-3d; par. 0025: “For example, some embodiments are configured to receive information and translate that information into a corresponding sign-language translation, which may be manifested, for example, in one or more video clips;” par. 0147: “In other implementations, the translation can be performed in pre-recoded content with or without time constraints on the translation process”), and wherein each of the one or more video tokens expresses a single word sign, a sequence of signs, or a shorthand representation of a sequence of signs (par. 0165: “the translation platform, when using pre-recorded videos for example, combines sign language video clip for "son" and the sign language video clip of the appropriate letter to provide the translation;” par. 0173: “the user interface can present the proposed translation (e.g., playback a sequence of pre-recorded videos”). Claims 3, 7, 13, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Bruner in view of Heubel as applied to claims 1 and 11 above, and further in view of US 2022/0343576 (hereinafter “Marey”). 
Regarding Claim 7 and related method Claim 17, Bruner discloses the accessibility track further includes a sign language performance performed using an animated model and a plurality of emotive data sets and utilized to control the animated model to perform emotions or gestures (figs. 3a-3d; par. 0025: “For example, some embodiments are configured to receive information and translate that information into a corresponding sign-language translation, which may be manifested, for example, in one or more video clips, displayed through animation and/or presented by one or more avatars, and/or other such manifestations or combinations of such manifestations;” par. 0050: “Similarly, when an avatar is used to present the sign language translation, facial expressions of the avatar, the pace of the avatar, the intensity of the avatar can be defined to portray some of these parameters and/or changes in parameters (e.g., detect based on volume, pace, intensity that a user is happy or mad, and modify the facial expressions of the avatar to reflect the detected parameter(s));” par. 0157: “Similarly, the facial expressions of the avatar can be adjusted to show emotion, such as narrowed eyes when angry, wide eyes when surprised, crying when sad and other such characteristics;” Examiner interprets that these facial expressions, emotions, and gestures necessarily come from some sort of data set), wherein the emotions or gestures are performed with at least one of a speed, forcefulness or emphasis (par. 0159: “numerous different parameters that can be obtained, tracked and/or modified over time that can be used in controlling and/or customizing the translation and/or presentation of the translation. Some of these parameters include, but are not limited to a speed of output and/or playback,… speed of translation, speed of presentation”). Bruner does not disclose that the plurality of emotive data sets correspond to the one or more word strings. However, Marey discloses the accessibility track further includes a sign language performance performed using an animated model and a plurality of emotive data sets corresponding to the one or more word strings and utilized to control the animated model to perform emotions or gestures (par. 0003: “the emotional state may be determined by at least spoken words of the first language;” par. 0005: “the translation application performs sentiment analysis, such as… an emotion identifier word in the spoken words (e.g., the word ‘happy’). The determined emotional state of the character is reflected in the face and body of the avatar to mimic the emotion of the character;” par. 0035: “a virtual avatar performing signs corresponding to the lines spoken in the video;” Examiner interprets that these facial expressions and emotions necessarily come from some sort of data set). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine emotive data sets corresponding to word strings as disclosed by Marey with the avatar performing sign language with emotions as disclosed by Bruner in order to offer a method of determining the intended sentiment/emotions based on the detected words (Marey, par. 0005) and/or to accurately reflect the intended emotion in the avatar (Marey, par. 0005). Regarding Claim 3 and related method Claim 13, Bruner further discloses the accessibility track comprises the sign language performance (par. 
0147: “Accordingly, information is received, translated and provided to a user, for example, in sign language”), and the sign language performance is configured to be displayed as a picture-in-picture (PiP) overlay on the primary content (figs. 3a-3d; par. 0041: “In one embodiment, the translation platform may be configured to output SL video clips… the outputted video clips may be displayed as a picture-in-picture style window”). Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Bruner in view of Heubel and Marey as applied to claims 3 and 13 above, and further in view of Shintani. Regarding Claim 4 and related method Claim 14, Bruner does not disclose user selections for the PiP overlay. However, Shintani discloses the PiP overlay is configured to be repositioned or toggled on or off based on a user selection (abstract: “The sign language window can be selectively disabled by a user that does not wish to view the sign language video. Also, in some implementations, the user can move the sign language window to a desired location on the display”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the PiP overlay of Bruner with the user selections of Shintani in order to offer user preference features and therefore improve the user’s experience (Shintani, par. 0059). Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Bruner in view of Heubel and Marey as applied to claims 3 and 13 above, and further in view of Habili. Regarding Claim 5 and related method Claim 15, Bruner does not disclose alpha masking. However, Habili discloses the PiP overlay of the sign language performance employs alpha masking to show only a performer of the sign language performance, or the performer having an outline added for contrast (figs. 11-12: the steps of alpha masking are shown, and only a sign language performer remains in the video afterwards; fig. 8: the hands of the sign language performer have been outlined; pg. 1092: “The FHSM is a binary map where a binary “1” indicates a moving skin color region, and a binary “0” indicates a background pixel. The FHSM is analogous to the alpha map in the MPEG-4 standard”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the avatars performing sign language as disclosed by Bruner with the alpha masking of Habili in order to compress the video size (Habili, pg. 1086) and to enhance the appearance of the placement of the PiP overlay of the sign language performance (Habili, figs. 11-12). Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Bruner in view of Heubel and Marey as applied to claims 7 and 17 above, and further in view of Zelenin. Regarding Claim 8 and related method Claim 18, Bruner does not explicitly disclose the model changing orientation. However, Zelenin discloses the animated model changes orientation during a scene to appear as facing a camera (figs. 10, 11, 13A-13B, 17: animated models are shown moving and changing orientation to face the camera or to conversely look away from the camera). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the ability of avatars to change orientation as disclosed by Zelenin with the avatars of Bruner in order to increase the overall realism and immersion—and therefore the user experience (Zelenin, abstract; pars. 0004 and 0029). Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bruner in view of Heubel and Marey as applied to claims 7 and 17 above, and further in view of Menefee. Regarding Claim 10 and related method Claim 20, Bruner does not disclose facial scanning. However, Menefee discloses the plurality of emotive data sets are derived from facial scanning (fig. 1: the device 110 scans the user’s (101) facial expression; par. 0047: “This process can be inverted by the device in that an outgoing communication of the second party, which now may also be in an audible language, is identified and translated for the first party. The device may output the translation as an incoming communication for the party as a type of visual language or a textual language. The device may input the visual language, audible language, facial expression, or textural language or input as an outgoing communication from the party… In some embodiments, the video rate of the sensor data capture may be selected based on the sign language input due to the increased complexity of some sign languages. The digital representation of the sign language communication may include one or more gestures, facial cues, body cues, or environmental factors;” par. 0059: “embodiments of the disclosed technology may leverage data that has previously been captured and digitized to reduce the amount of data that needs to be stored when the device is being used in real-time, either locally or in a remote setting.” Examiner interprets this to mean that there are sensors used for facial scanning, which retrieve data including facial cues and motions, and at least some of that data is stored in a data set). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the facial scanning of Menefee with the avatars of Bruner in order to more accurately depict emotions and other facial expressions since they are an important—and complex—part of communicating with sign language (Menefee, pars. 0004, 0044). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 2019/0088270 (Malur) teaches a method of analyzing an input containing words (such as an audio file, a general video, or a video containing sign language) and determining the likelihood of various emotions being intended by the detected words. Groups of words associated with particular emotions are stored in emotive data sets. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIE DOSHER whose telephone number is (571) 272-4842. The examiner can normally be reached Monday - Friday, 10 a.m. - 6 p.m. ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dmitry Suhol can be reached at (571) 272-4430. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /J.G.D./Examiner, Art Unit 3715 /DMITRY SUHOL/Supervisory Patent Examiner, Art Unit 3715
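
For readers skimming the rejection, the independent claims describe a content pipeline: run audio and/or visual analysis on the primary content, match detected word strings to haptic effects, and generate an accessibility track whose effects actuate when the content reaches the corresponding locations. The sketch below is only an illustration of that kind of pipeline; the class names, the timestamped transcript input, and the keyword table are assumptions, not the applicant's implementation or code from the cited references.

```python
# Illustrative sketch of the pipeline recited in independent claims 1 and 11:
# analyze the primary content, map detected word strings to haptic effects,
# and emit an accessibility track synchronized to content timestamps.
# All names, types, and the matching strategy are assumptions for illustration only.

from dataclasses import dataclass

# Hypothetical word-string-to-haptic-effect table (cf. Heubel's "rain" -> "raining"
# effect example quoted in the rejection).
WORD_TO_HAPTIC = {"rain": "raining", "crashing": "crash_rumble"}

@dataclass
class HapticEvent:
    timestamp_s: float   # location in the primary content where the effect actuates
    effect: str          # identifier of the haptic effect to output

def build_accessibility_track(transcript: list[tuple[float, str]]) -> list[HapticEvent]:
    """transcript: (timestamp, word) pairs produced by audio/visual analysis."""
    track = []
    for ts, word in transcript:
        effect = WORD_TO_HAPTIC.get(word.lower())
        if effect:
            track.append(HapticEvent(timestamp_s=ts, effect=effect))
    return track

# Example: word strings detected at 12.5 s and 47.0 s yield two synchronized haptic events.
events = build_accessibility_track([(12.5, "rain"), (30.0, "hello"), (47.0, "crashing")])
print(events)
```

In Heubel's terms, each table entry pairs a "predefined event" with a "predefined haptic effect"; the claims further recite synchronizing the track to the primary content and supplementing the content with it, elements the §103 rejection maps to Bruner's translation platform.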

Prosecution Timeline

May 03, 2022
Application Filed
Jun 18, 2025
Non-Final Rejection — §102, §103
Sep 22, 2025
Response Filed
Oct 07, 2025
Final Rejection — §102, §103
Dec 05, 2025
Response after Non-Final Action
Dec 16, 2025
Request for Continued Examination
Feb 11, 2026
Response after Non-Final Action
Mar 05, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12548460
EQUIPOTENTIAL ZONE (EPZ) GROUNDING TRAINING LAB
2y 5m to grant; granted Feb 10, 2026
Patent 12525149
Ground Based Aircraft Wing and Nacelle Mockup Design for Training
2y 5m to grant; granted Jan 13, 2026
Patent 12491728
Skull Mounting Device
2y 5m to grant; granted Dec 09, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 25%
With Interview: 99% (+100.0%)
Median Time to Grant: 3y 7m
PTA Risk: High
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
