Prosecution Insights
Last updated: April 19, 2026
Application No. 18/776,248

SYSTEM FOR GENERATING SOUND LOCALIZED WITH RESPECT TO A LOCATION OF A FLOATING IMAGE

Non-Final OA: §102, §103
Filed: Jul 17, 2024
Examiner: LEE, PING
Art Unit: 2695
Tech Center: 2600 — Communications
Assignee: Disney Enterprises Inc.
OA Round: 1 (Non-Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 2m
Grant Probability with Interview: 94%

Examiner Intelligence

Career Allow Rate: 66% (454 granted / 692 resolved; +3.6% vs TC avg, above average)
Interview Lift: +28.8% (strong), measured on resolved cases with interview
Typical Timeline: 3y 2m average prosecution; 23 applications currently pending
Career History: 715 total applications across all art units
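As a sanity check, the headline figures above can be reproduced from the raw counts. The sketch below assumes the dashboard rounds to whole percentage points and applies the interview lift additively; its exact methodology is not stated here.

```python
# Hypothetical reproduction of the dashboard's headline figures from the
# raw counts shown above. Assumes whole-point rounding and an additive
# interview lift; the dashboard's exact methodology is not published here.

granted, resolved = 454, 692
interview_lift = 0.288  # +28.8 percentage points, per the dashboard

career_allow_rate = granted / resolved            # ~0.656
with_interview = career_allow_rate + interview_lift

print(f"Career allow rate: {career_allow_rate:.0%}")  # 66%
print(f"With interview:    {with_interview:.0%}")     # 94%
```

Both printed values match the dashboard's 66% and 94%, which suggests the "with interview" figure is simply the career allow rate plus the lift.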

Statute-Specific Performance

§101: 3.8% (-36.2% vs TC avg)
§103: 43.7% (+3.7% vs TC avg)
§102: 22.0% (-18.0% vs TC avg)
§112: 21.3% (-18.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 692 resolved cases.
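The per-statute deltas are internally consistent: subtracting each delta from the examiner's rate recovers the same implied Tech Center baseline. A quick check, using only the values in the table (what the underlying rate measures is the dashboard's own metric, not an official USPTO statistic):

```python
# Consistency check on the statute-specific table above: subtracting each
# delta from the examiner's rate recovers the implied Tech Center baseline.
# Values are taken from the table; the baseline is the dashboard's estimate.

rows = {  # statute: (examiner rate %, delta vs TC avg %)
    "101": (3.8, -36.2),
    "103": (43.7, +3.7),
    "102": (22.0, -18.0),
    "112": (21.3, -18.7),
}

for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta
    print(f"§{statute}: implied TC average = {tc_avg:.1f}%")  # 40.0% for every row
```

All four rows recover the same ~40% baseline, matching the single Tech Center average estimate the chart legend described.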

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 17 and 30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Voris et al. (US 20180117465 A1; hereafter Voris).

Regarding claim 1, Voris discloses a system, comprising: a first light source (Fig. 2C, 260) to emit first light (Fig. 2C, 261C), at least a portion of the first light processed to provide a first image at a first location (e.g., the location of the flying bird shown in Fig. 2C); a second light source (260) to emit second light (Fig. 2C, 261C), at least a portion of the second light processed to provide a second image at a second location (e.g., the ocean noise, or the location of the flying bird at a different location, [0052], [0053]) different from the first location; a first sound source positioned separate from (as illustrated in Fig. 2A, speaker 252 and/or 256, e.g., is separated from the location of the flying bird), but in acoustical communication with, the first location (the location of the flying bird shown in Fig. 2C), the first sound source to output a first sound audible at the first location ([0022]; speaker 252 is needed in combination with at least speaker 250 to generate a moving sound object from one side of the wall to the other; furthermore, given the proximity of speaker 256 to the flying bird, the sound generated from speaker 256 is audible at the flying bird location as shown in Fig. 2C); a second sound source positioned separate from, but in acoustical communication with, the second location, the second sound source to output a second sound audible at the second location (the location the flying bird flies to other than the location shown in Fig. 2C, or the ocean noise; furthermore, given the proximity of speaker 256 to the flying bird, the sound generated from speaker 256 is audible at the flying bird location as shown in Fig. 2C); and a controller configured to: cause, based on the first light that has been emitted, the first sound source to output the first sound (the "timing triggers" discussed in [0020] are related to video/visual generation; see [0022] and [0052], e.g.), and cause, based on the second light that has been emitted, the second sound source to output the second sound.

Claims 17 and 30 are broader than claim 1 and include each and every limitation of claim 1 as discussed above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 18 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Voris.

Regarding claims 18 and 25, Voris discloses a camera and video data ([0043]), but fails to explicitly show that, responsive to the determination that the guest is interacting with the first image, the first sound source is caused to output the first sound. The claimed limitation reads on a scenario in which the user is playing a particular game that would generate sound after detecting the user interacting with an image. Voris teaches a general interactive game system which is able to generate both visual and audio effects depending on the multimedia content, including well-known games that require interacting with a visual image and then generating the corresponding sound effect, without generating any unexpected result. Thus, it would have been obvious to one of ordinary skill in the art to modify Voris by playing a well-known game, including one requiring interaction with a visual image before generating a sound effect, because it is a matter of user preference.

Claims 8, 27 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Norris et al. (US 20150139439 A1; hereafter Norris) in view of Frayne et al. (US 10012841 B1; hereafter Frayne).
Regarding claims 8 and 27, Norris discloses a system, comprising: a first sound source adapted to output a first sound that is audible at the first location (the location for exhibit A) and inaudible at the second location (the location for exhibit B; see [0096]); and a second sound source adapted to output a second sound that is audible at the second location (the location for exhibit B) and inaudible at the first location (the location for exhibit A).

Norris fails to show a light source. Norris teaches a general museum that includes several exhibitions. One skilled in the art would have expected that the exhibitions could include well-known types, such as those generated by a light source accompanied by a sound effect. Frayne teaches an advanced display, generating visuals with a light source and allowing user interaction detected by a camera (col. 8, lines 28-34), in addition to providing 2D and 3D images (col. 4, lines 22-49). No separate camera is needed. Frayne teaches a beam splitter and a retroreflector. Thus, it would have been obvious to one of ordinary skill in the art to modify Norris by utilizing the advanced display taught in Frayne in order to enhance the visual effect of an exhibition while capturing the user interaction, and to limit the sound generation to the area of an exhibition generated by a display while not leaking the sound to another exhibition generated by the light source.

Regarding claim 28, the beamsplitter in Frayne inherently reflects incident audio.

Claims 2, 3, 8, 9, 13, 16 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Voris in view of Norris.

Regarding claims 2, 3, 8, 13 and 16, Voris fails to show an ultrasonic transducer. Voris teaches mounting general speakers (20, 252, 254, 256) on the walls for providing a sound effect. However, general speakers are restricted to being mounted at certain locations in order to provide the sound effect.
In Voris, the general speakers are placed at locations for providing a spatial sound effect. For example, to simulate a bird flying from left to right, one skilled in the art would have expected that left and right speakers are required to be mounted at specific and distinct locations relative to each other. Norris teaches an ultrasonic transducer that is not as limited as the general speakers. The ultrasonic transducer can direct its beam to a specific area without requiring the transducer to be mounted at a specific location. Furthermore, the ultrasonic transducer can generate sound that is focused on a specific area, with no spillage of the sound, based on the ultrasound control. Such a characteristic would benefit a user who prefers privacy. Thus, it would have been obvious to one of ordinary skill in the art to modify Voris in view of Norris by replacing the general speakers with ultrasonic transducers in order to relax the constraints on speaker placement and provide sound generation in a limited area corresponding to the image.

Claims 9 and 29 correspond to claim 25 discussed above.

Claims 8, 20-23 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Voris in view of Frayne.

Regarding claims 21-23 and 25, Voris fails to show a beamsplitter and a reflective element. Voris teaches a general projector. Frayne teaches an advanced display allowing user interaction detected by a camera (col. 8, lines 28-34) in addition to providing 2D and 3D images (col. 4, lines 22-49). No separate camera, as stated in Voris, is needed. Thus, it would have been obvious to one of ordinary skill in the art to modify Voris by replacing the general projector with the advanced display taught in Frayne in order to enhance the visual effect while capturing the user interaction.

Regarding claim 20, Voris fails to show the controller selecting the first or second sound to be output to the corresponding area.
With the combination of Voris and Frayne as discussed above, the image is generated based on the detected user presence/action (e.g., Fig. 24) or the required perception of the image position (col. 4, lines 22-49). That is, either the first image or the second image would be presented based on the detected data. Thus, it would have been obvious to one of ordinary skill in the art to modify Voris and Frayne by configuring the controller to control sound generation based on the detected user presence/action or the required image position.

Claims 24 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Voris in view of Hu (US 20150193000 A1).

Regarding claims 24 and 26, Voris teaches a camera, but fails to show light sensors. However, Voris teaches various functionally equivalent sensors for detecting user motion ([0021]). One skilled in the art would have expected that other well-known sensors, including light sensors, could be used for detecting user action without generating any unexpected result. Hu teaches detecting a user's interactive motion by utilizing a camera and a light sensor ([0017]). Hu teaches detecting the user's gesture in a small area (a keyboard area, e.g.). Although Hu fails to explicitly teach plural light sensors, plural light sensors could accurately detect the user's motion in a larger area. Thus, it would have been obvious to one of ordinary skill in the art to modify Voris by implementing light sensors in addition to the camera in order to accurately detect the user's position and action in a large environment.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PING LEE, whose telephone number is (571) 272-7522. The examiner can normally be reached Monday-Friday. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vivian Chin, can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/PING LEE/
Primary Examiner, Art Unit 2695

Prosecution Timeline

Jul 17, 2024: Application Filed
Mar 18, 2025: Response after Non-Final Action
Feb 21, 2026: Non-Final Rejection, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581263: METHOD FOR MANAGING AN AUDIO STREAM USING AN IMAGE ACQUISITION DEVICE AND ASSOCIATED DECODER EQUIPMENT (granted Mar 17, 2026; 2y 5m to grant)
Patent 12548542: ACTIVE NOISE CANCELLER DEVICE (granted Feb 10, 2026; 2y 5m to grant)
Patent 12542123: MASK NON-LINEAR PROCESSOR FOR ACOUSTIC ECHO CANCELLATION (granted Feb 03, 2026; 2y 5m to grant)
Patent 12543002: Headset Audio (granted Feb 03, 2026; 2y 5m to grant)
Patent 12519438: SYSTEM AND METHOD FOR AUTOMATIC ADJUSTMENT OF REFERENCE GAIN (granted Jan 06, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66% (94% with interview, +28.8%)
Median Time to Grant: 3y 2m
PTA Risk: Low

Based on 692 resolved cases by this examiner. Grant probability derived from career allow rate.
