Prosecution Insights
Last updated: April 19, 2026
Application No. 18/778,853

SYSTEMS AND METHODS FOR IDENTIFYING A LOCATION OF A SOUND SOURCE

Non-Final OA (§102)
Filed: Jul 19, 2024
Examiner: GAUTHIER, GERALD
Art Unit: 2692
Tech Center: 2600 — Communications
Assignee: Sony Interactive Entertainment Inc.
OA Round: 1 (Non-Final)

Grant Probability: 91% (Favorable)
Expected OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 98%

Examiner Intelligence

Grants 91% — above average

Career Allow Rate: 91% (1630 granted / 1791 resolved; +29.0% vs TC avg)
Interview Lift: +6.5% (moderate), among resolved cases with interview
Avg Prosecution: 2y 9m typical timeline; 17 currently pending
Career History: 1808 total applications across all art units
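The headline figures above can be reproduced from the raw counts. A quick check in Python (assuming the "+29.0% vs TC avg" delta is measured in percentage points):

```python
# Career allow rate from the raw counts shown above.
granted = 1630
resolved = 1791

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 91.0%

# The "+29.0% vs TC avg" delta, read as percentage points,
# implies this Tech Center average:
tc_avg = allow_rate - 0.290
print(f"Implied TC average: {tc_avg:.1%}")  # 62.0%
```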

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§103: 30.9% (-9.1% vs TC avg)
§102: 29.3% (-10.7% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 1791 resolved cases.
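Reading each "vs TC avg" delta as a percentage-point difference, the Tech Center baseline implied by each row can be back-computed as rate minus delta:

```python
# Statute-specific rates and deltas vs. the Tech Center average, as listed above.
# Assuming the deltas are percentage-point differences, the implied TC baseline
# for each statute is rate - delta.
stats = {
    "§101": (9.5, -30.5),
    "§103": (30.9, -9.1),
    "§102": (29.3, -10.7),
    "§112": (8.3, -31.7),
}

for statute, (rate, delta) in stats.items():
    baseline = rate - delta
    print(f"{statute}: {rate:.1f}% -> implied TC baseline {baseline:.1f}%")
```

Every implied baseline comes out to 40.0%, which suggests the dashboard compares all four statutes against a single flat Tech Center estimate rather than per-statute averages.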

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-21 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Obana et al. (US 2015/0348378 A1).
As to claim 1, Obana discloses a method for identifying a location of a sound source [Abstract], comprising: determining whether a predetermined condition regarding the sound source in a game is achieved during a play of the game [“The information processing apparatus executes a pre-installed program (a game program) regarding the sound output during the game which is connected to the information process.” Paragraphs 0065 and 0066]; modifying first audio data generated based on a first sound received from a player in response to determining that the predetermined condition is achieved, wherein the first audio data is modified in a three-dimensional audio space to output first modified audio data, wherein the player controls the sound source in the game [“The control section updates (modifies) the sound source type data using data representing the type of sound and sets the sound localization position in 3D space and output from the sound output apparatus the update audio.” Paragraphs 0100 and 0125]; and providing the first modified audio data via a computer network to a client device to output a first modified sound [“The system output the update sound to execute information the information processing between a stationary apparatus via a computer network and a handheld game apparatus.” Paragraphs 0100 and 0132].

As to claim 2, Obana discloses the method of claim 1, wherein said modifying the first audio data includes changing a location of output of the first audio data as the first sound [“The user perceives the vibration source placed at a position and shifted (changed) to the sound source localization position. The examiner chooses this limitation because of the simple or.” Paragraph 0128].
As to claim 3, Obana discloses the method of claim 1, comprising modifying second audio data to be output as a second sound from a virtual object in the same virtual scene as that of the sound source in response to determining that the predetermined condition is achieved [“The control section updates (modifies) the sound source type data using data representing the type of sound and sets the sound localization position in 3D space and output from the sound output apparatus the update audio.” Paragraphs 0100 and 0125].

As to claim 4, Obana discloses the method of claim 3, comprising: determining whether the virtual object is within a predetermined distance from the sound source, wherein the second audio data to be output as the second sound from the virtual object is modified upon determining that the virtual object is within the predetermined distance, wherein the second audio data to be output as the second sound from the virtual object is modified to generate second modified audio data [“The control section updates (modifies) the sound source type data using data representing the type of sound and sets the sound localization position in 3D space and output from the sound output apparatus the update audio.” Paragraphs 0100 and 0125].

As to claim 5, Obana discloses the method of claim 3, wherein the virtual object provides a background to the sound source [“The information processing apparatus impart to the user vibrations around a wideband of frequencies as background to the sound source.” Paragraph 0093].
As to claim 6, Obana discloses the method of claim 3, wherein said modifying the second audio data to be output as the second sound from the virtual object includes reducing an amplitude of the second audio data to be output as the second sound from the virtual object to decrease an amount of the second sound to be output from the virtual object [“The control is performed so the amplitude of the vibration is imparted by the left actuator so the left amplitude of the output is smaller.” Paragraph 0089].

As to claim 7, Obana discloses the method of claim 1, comprising: determining whether a game context of a virtual scene in which the sound source is displayed meets a predetermined criteria, wherein said providing the first modified audio data occurs after a delay in response to determining that the game context meets the predetermined criteria [“The control session determines whether or not the game is to be ended depends of the satisfaction of the condition (predetermined criteria) and the control session will end the game process.” Paragraph 0129].

As to claim 8, Obana discloses the method of claim 1, comprising: modifying visual representation data identifying a location of the sound source in response to determining that predetermined condition is achieved, wherein the visual representation data is modified to output modified visual representation data [“The visual representation and the localization position of the sounds provide a realistic experience to the user.” Paragraph 0097]; and providing the modified visual representation data via the computer network to the client device for display of a modified visual representation [“The system output the update sound and the visual sensation to execute information the information processing between a stationary apparatus via a computer network and a handheld game apparatus.” Paragraphs 0098, 0100, 0129 and 0132].
As to claim 9, Obana discloses the method of claim 8, comprising: determining whether a game context of a virtual scene in which the sound source is displayed meets a predetermined criteria, wherein said providing the modified visual representation data occurs after a delay in response to determining that the game context meets the predetermined criteria [“The control session determines whether or not the game is to be ended depends of the satisfaction of the condition (predetermined criteria) and the control session will end the game process.” Paragraph 0129].

As to claim 10, Obana discloses a server system [FIG. 2] for identifying a location of a sound source, comprising: a processor [Control section 31 on FIG. 2] configured to: determine whether a predetermined condition regarding the sound source in a game is achieved during a play of the game [“The information processing apparatus executes a pre-installed program (a game program) regarding the sound output during the game which is connected to the information process.” Paragraphs 0065 and 0066]; modify first audio data generated based on a first sound received from a first player in response to determining that the predetermined condition is achieved, wherein the first audio data is modified in a three-dimensional audio space to output first modified audio data [“The control section updates (modifies) the sound source type data using data representing the type of sound and sets the sound localization position in 3D space and output from the sound output apparatus the update audio.” Paragraphs 0100 and 0125]; and provide the first modified audio data via a computer network to a client device to output a first modified sound [“The system output the update sound to execute information the information processing between a stationary apparatus via a computer network and a handheld game apparatus.” Paragraphs 0100 and 0132]; and a memory device coupled to the processor [Storage section 32 on FIG. 2].
As to claim 11, Obana discloses the server system of claim 10, wherein to modify the first audio data, the processor is configured to change a location of output of the first audio data as the first sound [“The user perceives the vibration source placed at a position and shifted (changed) to the sound source localization position. The examiner chooses this limitation because of the simple or.” Paragraph 0128].

As to claim 12, Obana discloses the server system of claim 10, wherein the processor is configured to modify second audio data to be output as a second sound from a virtual object in the same virtual scene as that of the sound source in response to determining that the predetermined condition is achieved [“The control section updates (modifies) the sound source type data using data representing the type of sound and sets the sound localization position in 3D space and output from the sound output apparatus the update audio.” Paragraphs 0100 and 0125].

As to claim 13, Obana discloses the server system of claim 12, wherein the processor is configured to: determine whether the virtual object is within a predetermined distance from the sound source in the virtual scene, wherein the second audio data to be output as the second sound from the virtual object is modified upon determining that the virtual object is within the predetermined distance, wherein the second audio data to be output as the second sound from the virtual object is modified to generate second modified audio data [“The control section updates (modifies) the sound source type data using data representing the type of sound and sets the sound localization position in 3D space and output from the sound output apparatus the update audio.” Paragraphs 0100 and 0125].
As to claim 14, Obana discloses the server system of claim 12, wherein the virtual object provides a background to the sound source [“The information processing apparatus impart to the user vibrations around a wideband of frequencies as background to the sound source.” Paragraph 0093].

As to claim 15, Obana discloses the server system of claim 12, wherein to modify the second audio data to be output as the second sound from the virtual object, the processor is configured to reduce an amplitude of the second audio data to be output as the second sound from the virtual object to decrease an amount of the second sound to be output from the virtual object [“The control is performed so the amplitude of the vibration is imparted by the left actuator so the left amplitude of the output is smaller.” Paragraph 0089].

As to claim 16, Obana discloses the server system of claim 10, wherein the processor is configured to: determine whether a game context of a virtual scene in which the sound source is displayed meets a predetermined criteria, wherein the first modified audio data is provided after a delay in response to determining that the game context meets the predetermined criteria [“The control session determines whether or not the game is to be ended depends of the satisfaction of the condition (predetermined criteria) and the control session will end the game process.” Paragraph 0129].
As to claim 17, Obana discloses the server system of claim 10, wherein the processor is configured to: modify visual representation data identifying a location of the sound source in response to determining that the predetermined condition is achieved, wherein the visual representation data is modified to output modified visual representation data [“The visual representation and the localization position of the sounds provide a realistic experience to the user.” Paragraph 0097]; and provide the modified visual representation data via the computer network to the client device for display of a modified visual representation [“The system output the update sound and the visual sensation to execute information the information processing between a stationary apparatus via a computer network and a handheld game apparatus.” Paragraphs 0098, 0100, 0129 and 0132].

As to claim 18, Obana discloses the server system of claim 17, wherein the processor is configured to: determine whether a game context of a virtual scene in which the sound source is displayed meets a predetermined criteria, wherein the modified visual representation data is provided after a delay in response to determining that the game context meets the predetermined criteria [“The control session determines whether or not the game is to be ended depends of the satisfaction of the condition (predetermined criteria) and the control session will end the game process.” Paragraph 0129].
As to claim 19, Obana discloses a non-transitory computer readable medium containing program instructions for identifying a location of a sound source [Paragraph 0135], wherein execution of the program instructions by one or more processors of a computer system causes the one or more processors to carry out operations comprising: determining whether a predetermined condition regarding the sound source in a game is achieved during a play of the game [“The information processing apparatus executes a pre-installed program (a game program) regarding the sound output during the game which is connected to the information process.” Paragraphs 0065 and 0066]; modifying first audio data generated based on a first sound received from a player in response to determining that the predetermined condition is achieved, wherein the first audio data is modified in a three-dimensional audio space to output first modified audio data, wherein the player controls the sound source in the game [“The information processing apparatus executes a pre-installed program (a game program) regarding the sound output during the game which is connected to the information process.” Paragraphs 0065 and 0066]; and providing the first modified audio data via a computer network to a client device to output a first modified sound [“The system output the update sound to execute information the information processing between a stationary apparatus via a computer network and a handheld game apparatus.” Paragraphs 0100 and 0132]. 
As to claim 20, Obana discloses the non-transitory computer readable medium of claim 19, wherein the operations include modifying second audio data to be output as a second sound from a virtual object associated with the sound source in response to determining that the predetermined condition is achieved [“The control section updates (modifies) the sound source type data using data representing the type of sound and sets the sound localization position in 3D space and output from the sound output apparatus the update audio.” Paragraphs 0100 and 0125].

As to claim 21, Obana discloses the non-transitory computer readable medium of claim 20, wherein the operations include determining whether the virtual object is within a predetermined distance from the sound source in a virtual scene, wherein the second audio data to be output as the second sound from the virtual object is modified upon determining that the virtual object is within the predetermined distance, wherein the second audio data to be output as the second sound from the virtual object is modified to generate second modified audio data [“The control section updates (modifies) the sound source type data using data representing the type of sound and sets the sound localization position in 3D space and output from the sound output apparatus the update audio.” Paragraphs 0100 and 0125].

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892 form. Stafford et al. (US 2022/0362680 A1) discloses a method that further includes training a model using the image data and the input data to generate an inference of communication between the first user and the second user.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GERALD GAUTHIER whose telephone number is (571)272-7539. The examiner can normally be reached 8:00 AM to 4:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, CAROLYN R EDWARDS, can be reached at (571) 270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GERALD GAUTHIER/
Primary Examiner, Art Unit 2692
January 27, 2026
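Claim 1, as mapped at the start of the rejection, recites a three-step pipeline: detect a predetermined in-game condition, modify the player's audio in a three-dimensional audio space, and deliver the modified audio to a client device over a network. A minimal sketch of that flow (all names and the spatialization stub are hypothetical, taken neither from the application nor from Obana):

```python
from dataclasses import dataclass

@dataclass
class AudioFrame:
    samples: list[float]
    position: tuple[float, float, float]  # location in the 3D audio space

def condition_achieved(game_state: dict) -> bool:
    # Placeholder for "a predetermined condition regarding the sound source".
    return game_state.get("objective_complete", False)

def spatialize(frame: AudioFrame, new_position: tuple[float, float, float]) -> AudioFrame:
    # Stand-in for modifying the audio data in a 3D audio space: here we
    # simply relocate the source; a real engine would also apply HRTF
    # filtering, distance attenuation, etc.
    return AudioFrame(samples=frame.samples, position=new_position)

def process(game_state: dict, frame: AudioFrame, send) -> bool:
    """Run the claim-1 style pipeline; returns True if modified audio was sent."""
    if not condition_achieved(game_state):
        return False
    modified = spatialize(frame, new_position=(0.0, 1.0, 2.0))
    send(modified)  # e.g. transmit to the client device over the network
    return True
```

Here `send` is injected so the network hop can be stubbed; in an anticipation dispute like this one, the question is whether the cited reference performs each of these steps as claimed, which is why the rejection maps one Obana passage per limitation.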

Prosecution Timeline

Jul 19, 2024: Application Filed
Jan 27, 2026: Non-Final Rejection (§102)
Feb 27, 2026: Interview Requested
Mar 10, 2026: Applicant Interview (Telephonic)
Mar 10, 2026: Examiner Interview Summary
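The intervals between these events can be verified directly from the dates:

```python
from datetime import date

filed = date(2024, 7, 19)
first_oa = date(2026, 1, 27)
interview = date(2026, 3, 10)

print((first_oa - filed).days)      # 557 days (~18 months) from filing to first action
print((interview - first_oa).days)  # 42 days from the rejection to the interview
```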

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604148: AUDIO PROCESSING USING EAR-WEARABLE DEVICE AND WEARABLE VISION DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602197: CONFIGURATION OF PLATFORM APPLICATION WITH AUDIO PROFILE OF A USER (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596522: ARTIFICIAL REALITY BASED DJ SYSTEM, METHOD AND COMPUTER PROGRAM IMPLEMENTING A SCRATCHING OPERATION OR A PLAYBACK CONTROL OPERATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597435: SIGNAL PROCESSING APPARATUS AND SIGNAL PROCESSING METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598411: HEARING DEVICE COMPRISING A PARTITION (granted Apr 07, 2026; 2y 5m to grant)
Study what changed to get past this examiner; the list reflects the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 91%
With Interview: 98% (+6.5%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 1791 resolved cases by this examiner. Grant probability derived from career allow rate.
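The "with interview" figure follows from the career allow rate if the interview lift is treated as additive in percentage points and the result is rounded:

```python
base = 91.0           # grant probability, from the career allow rate (%)
interview_lift = 6.5  # percentage-point lift for cases with an interview

with_interview = base + interview_lift
print(f"{with_interview:.1f}% -> reported as {round(with_interview)}%")  # 97.5% -> 98%
```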
