DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/16/2026 has been entered.
Response to Arguments
Applicant’s arguments filed 01/16/2026 with respect to claims 1-4, 6-14, and 16-24 have been considered but are moot because the new ground of rejection does not rely on reference MacAuley, applied in the prior rejection of record, for any teaching or matter specifically challenged in the argument.
Regarding the argument that reference MacAuley does not suggest the limitation “wherein the first border indicates at least the direction of the audio source included in a plurality of audio sources,” the examiner notes that the current rejection relies on reference Hwan at Para [0112] to read on this limitation.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-14, 16-18, and 20-24 are rejected under 35 U.S.C. 103 as being unpatentable over Collins (US 2011/0283865 A1) in view of Liang et al (US 2023/0201717 A1), further in view of Aoki et al (JP 2002-315966 A), and further in view of Hwan et al (KR20200027909A).
Regarding claim 1, Collins discloses a method for communicating audio data, comprising: generating a set of identifiers for an audio source in a scene of a video game to identify an emotion output from the audio source and a direction of the audio source (Collins; Para [0039]), wherein the direction is identified relative to a location of a virtual object being controlled in the scene by a user (Collins; Para [0061]); but does not expressly disclose sending data to a client device to generate a display region in the scene of the video game, wherein the display region includes the set of identifiers that graphically indicate the emotion output from the audio source and the direction of the audio source relative to the virtual object; and presenting, based on at least the emotion, a first border with the scene, wherein the first border indicates at least the direction of the audio source included in a plurality of audio sources. However, in the same field of endeavor, Liang et al disclose a method comprising sending data to a client device to generate a display region in the scene of the video game (Liang et al; Para [0029]; Para [0033] game application on a remote server; client device receives data from remote server), wherein the display region includes the set of identifiers that graphically indicate the direction of the audio source relative to the virtual object (Liang et al; Para [0029] identifier 290D shows direction of audio source relative to the player in the display). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Liang as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to provide subtitles or closed captions for computer and video games in a more flexible manner (Liang et al; Para [0003]).
Moreover, in the same field of endeavor, Aoki et al disclose a method comprising sending data to a client device to generate a display region in the scene of the video game (Aoki et al; Fig 7; Para [0025][0030][0041]), wherein the display region includes the set of identifiers that graphically indicate the emotion output from the audio source (Aoki et al; Fig 7; Para [0025][0030][0041]) and presenting, based on at least the emotion, a first border with the scene (Aoki et al; Para [0043]; windows showing emotion state). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the emotion presentation taught by Aoki as a visual aid for the emotion attributes taught by Collins. The motivation to do so would have been to greatly reduce the work burden on the player in a game (Aoki et al; Para [0046]). Furthermore, in the same field of endeavor, Hwan et al disclose a method wherein the first border indicates at least the direction of the audio source included in a plurality of audio sources (Hwan et al; Fig 9; Para [0083][0112]; border 931 indicates the direction of the sound source included in a plurality of audio sources in the environment). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Hwan as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to enjoy the game regardless of surrounding conditions and hearing ability (Hwan et al; Para [0034]).
Regarding claim 2, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the method of claim 1, but do not expressly disclose wherein the one or more virtual objects includes one or more video game characters. However, in the same field of endeavor, Aoki et al disclose a method wherein the one or more virtual objects includes one or more video game characters (Aoki et al; Fig 7; Para [0025][0030][0041]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the emotion presentation taught by Aoki as a visual aid for the emotion attributes taught by Collins. The motivation to do so would have been to greatly reduce the work burden on the player in a game (Aoki et al; Para [0046]).
Regarding claim 3, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the method of claim 1, but do not expressly disclose further comprising: accessing at least one indicator of at least one amplitude of at least one of a plurality of sounds to be output with the scene; and sending the at least one indicator of the at least one amplitude to display the at least one indicator of the at least one amplitude with the output of the scene. However, in the same field of endeavor, Hwan et al disclose a method further comprising: accessing at least one indicator of at least one amplitude of at least one of a plurality of sounds to be output with the scene (Hwan et al; Fig 9; Para [0112]-[0117]; size of audio interpreted as audio amplitude); and sending the at least one indicator of the at least one amplitude to display the at least one indicator of the at least one amplitude with the output of the scene (Hwan et al; Fig 9; Para [0112]-[0117]; size of audio interpreted as audio amplitude). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Hwan as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to enjoy the game regardless of surrounding conditions and hearing ability (Hwan et al; Para [0034]).
Regarding claim 6, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the method of claim 1, but do not expressly disclose wherein the first border further indicates at least one of: an emotion of the audio source, an identity of the audio source, or a volume of the audio source. However, in the same field of endeavor, Hwan et al disclose a method wherein the first border further indicates at least one of: an emotion of the audio source, an identity of the audio source, or a volume of the audio source (Hwan et al; Fig 9; Para [0112]; first border indicates size of audio source interpreted as volume of audio source). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Hwan as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to enjoy the game regardless of surrounding conditions and hearing ability (Hwan et al; Para [0034]).
Regarding claim 7, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the method of claim 1, wherein the scene is to be displayed with output of a plurality of sounds, wherein the plurality of sounds include a sound of a character, a sound of music, an ambient sound, and a sudden sound (Collins; Para [0045]).
Regarding claim 8, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the method of claim 1, but do not expressly disclose further comprising: generating a plurality of image frames having the set of identifiers, wherein said sending the data includes sending the plurality of image frames via a computer network to the client device for display of the plurality of image frames. However, in the same field of endeavor, Liang et al disclose a method further comprising: generating a plurality of image frames having the set of identifiers (Liang et al; Para [0029]; Para [0033] game application on a remote server; client device receives data from remote server), wherein said sending the data includes sending the plurality of image frames via a computer network to the client device for display of the plurality of image frames (Liang et al; Para [0029] server transmits image data to client device). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Liang as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to provide subtitles or closed captions for computer and video games in a more flexible manner (Liang et al; Para [0003]).
Regarding claim 9, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the method of claim 1, but do not expressly disclose wherein said sending the data includes sending the set of identifiers via a computer network to the client device, wherein the client device is configured to generate a plurality of images having the set of identifiers for display of the plurality of images on a display device of the client device. However, in the same field of endeavor, Liang et al disclose a method wherein said sending the data includes sending the set of identifiers via a computer network to the client device (Liang et al; Para [0029]; Para [0033] game application on a remote server; client device receives data from remote server), wherein the client device is configured to generate a plurality of images having the set of identifiers for display of the plurality of images on a display device of the client device (Liang et al; Para [0029] server transmits image data to client device). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Liang as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to provide subtitles or closed captions for computer and video games in a more flexible manner (Liang et al; Para [0003]).
Regarding claim 10, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the method of claim 1, but do not expressly disclose further comprising: determining whether a plurality of volumes of a plurality of sounds are below a pre-determined threshold, and accessing the set of identifiers upon determining that the plurality of volumes are below the pre-determined threshold. However, in the same field of endeavor, Hwan et al disclose a method further comprising: determining whether a plurality of volumes of a plurality of sounds are below a pre-determined threshold (Hwan et al; Para [0158][0112]-[0117]; zero sound is interpreted as sound below threshold), and accessing the set of identifiers upon determining that the plurality of volumes are below the pre-determined threshold (Hwan et al; Para [0158][0112]-[0117]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Hwan as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to enjoy the game regardless of surrounding conditions and hearing ability (Hwan et al; Para [0034]).
Regarding claim 11, Collins discloses a server for communicating audio data (Collins; Para [0006]), comprising: one or more storage media storing first instructions; and one or more processors configured to execute the instructions to cause the server to (Collins; Para [0027][0064]): generate a set of identifiers for an audio source in a scene of a video game to identify an emotion output from the audio source and a direction of the audio source (Collins; Para [0039]), wherein the direction is identified relative to a location of a virtual object being controlled in the scene by a user (Collins; Para [0061]); but does not expressly disclose send data to a client device to generate a display region in the scene of the video game, wherein the display region includes the set of identifiers that graphically indicate the emotion output from the audio source and the direction of the audio source relative to the virtual object; presenting, based on at least the emotion, a border with the scene; wherein the first border indicates at least the direction of the audio source included in a plurality of audio sources. However, in the same field of endeavor, Liang et al disclose sending data to a client device to generate a display region in the scene of the video game (Liang et al; Para [0029]; Para [0033] game application on a remote server; client device receives data from remote server), wherein the display region includes the set of identifiers that graphically indicate the direction of the audio source relative to the virtual object (Liang et al; Para [0029] identifier 290D shows direction of audio source relative to the player in the display). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Liang as a visual aid for the audio attributes taught by Collins.
The motivation to do so would have been to provide subtitles or closed captions for computer and video games in a more flexible manner (Liang et al; Para [0003]). Moreover, in the same field of endeavor, Aoki et al disclose sending data to a client device to generate a display region in the scene of the video game (Aoki et al; Fig 7; Para [0025][0030][0041]), wherein the display region includes the set of identifiers that graphically indicate the emotion output from the audio source (Aoki et al; Fig 7; Para [0025][0030][0041]) and presenting, based on at least the emotion, a first border with the scene (Aoki et al; Para [0043]; windows showing emotion state). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the emotion presentation taught by Aoki as a visual aid for the emotion attributes taught by Collins. The motivation to do so would have been to greatly reduce the work burden on the player in a game (Aoki et al; Para [0046]). Furthermore, in the same field of endeavor, Hwan et al disclose a method wherein the first border indicates at least the direction of the audio source included in a plurality of audio sources (Hwan et al; Fig 9; Para [0083][0112]; border 931 indicates the direction of the sound source included in a plurality of audio sources in the environment). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Hwan as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to enjoy the game regardless of surrounding conditions and hearing ability (Hwan et al; Para [0034]).
Regarding claim 12, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the server of claim 11, but do not expressly disclose wherein the processor is configured to: determine that a plurality of volumes of a plurality of sounds are below a pre-determined threshold; and access one of the set of identifiers based at least in part on the direction of occurrence of the one of the plurality of sounds upon determining that the plurality of volumes are below the pre-determined threshold. However, in the same field of endeavor, Hwan et al disclose a method comprising determining whether a plurality of volumes of a plurality of sounds are below a pre-determined threshold (Hwan et al; Para [0158][0112]-[0117]; zero sound is interpreted as sound below threshold), and accessing one of the set of identifiers based at least in part on the direction of occurrence of the one of the plurality of sounds upon determining that the plurality of volumes are below the pre-determined threshold (Hwan et al; Para [0158][0112]-[0117]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Hwan as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to enjoy the game regardless of surrounding conditions and hearing ability (Hwan et al; Para [0034]).
Regarding claim 13, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the server of claim 11, but do not expressly disclose wherein the processor is configured to: determine that a plurality of volumes of a plurality of sounds are below a pre-determined threshold; access at least one indicator of at least one amplitude of the at least one of the plurality of sounds upon determining that the plurality of volumes are below the pre-determined threshold; and send the at least one indicator of the at least one amplitude to display the at least one indicator of the at least one amplitude with an output of the scene. However, in the same field of endeavor, Hwan et al disclose a method further comprising: determining that a plurality of volumes of a plurality of sounds are below a pre-determined threshold (Hwan et al; Fig 9; Para [0112]-[0117]); accessing at least one indicator of at least one amplitude of at least one of a plurality of sounds to be output with the scene (Hwan et al; Fig 9; Para [0112]-[0117]; size of audio interpreted as audio amplitude); and sending the at least one indicator of the at least one amplitude to display the at least one indicator of the at least one amplitude with the output of the scene (Hwan et al; Fig 9; Para [0112]-[0117]; size of audio interpreted as audio amplitude). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Hwan as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to enjoy the game regardless of surrounding conditions and hearing ability (Hwan et al; Para [0034]).
Regarding claim 14, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the server of claim 11, but do not expressly disclose wherein the processor is configured to: determine that a plurality of volumes of a plurality of sounds are below a pre-determined threshold; generate display data for displaying at least one border based on at least one of the plurality of sounds upon determining that the plurality of volumes are below the pre-determined threshold; and send the display data to display the at least one border with an output of the scene. However, in the same field of endeavor, Hwan et al disclose a device wherein the processor is configured to: determine that a plurality of volumes of a plurality of sounds are below a pre-determined threshold; generate display data for displaying at least one border based on at least one of the plurality of sounds upon determining that the plurality of volumes are below the pre-determined threshold (Hwan et al; Para [0158][0112]-[0117]; zero sound is interpreted as sound below threshold); and send the display data to display the at least one border with an output of the scene (Hwan et al; Para [0158][0112]-[0117]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Hwan as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to enjoy the game regardless of surrounding conditions and hearing ability (Hwan et al; Para [0034]).
Regarding claim 16, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the server of claim 11, but do not expressly disclose wherein at least one of a thickness of the first border, a color of the first border, or a pattern of the first border indicates the direction. However, in the same field of endeavor, Hwan et al disclose a method wherein at least one of a thickness of the first border, a color of the first border, or a pattern of the first border indicates the direction (Hwan et al; Fig 9; Para [0083][0112]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Hwan as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to enjoy the game regardless of surrounding conditions and hearing ability (Hwan et al; Para [0034]).
Regarding claim 17, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the server of claim 11, wherein the scene is to be displayed with output of a plurality of sounds, wherein the plurality of sounds include a sound of a character, a sound of music, an ambient sound, and a sudden sound (Collins; Para [0045]).
Regarding claim 18, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the server of claim 11, but do not expressly disclose wherein the processor is configured to generate a plurality of image frames having the set of identifiers, wherein to send the data, the processor is configured to send the plurality of image frames via a computer network to the client device for display of the plurality of image frames. However, in the same field of endeavor, Liang et al disclose a device wherein the processor is configured to generate a plurality of image frames having the set of identifiers (Liang et al; Para [0029]; Para [0033] game application on a remote server; client device receives data from remote server), wherein to send the data, the processor is configured to send the plurality of image frames via a computer network to the client device for display of the plurality of image frames (Liang et al; Para [0029] server transmits image data to client device). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Liang as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to provide subtitles or closed captions for computer and video games in a more flexible manner (Liang et al; Para [0003]).
Regarding claim 20, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the server of claim 11, but do not expressly disclose wherein the processor is configured to: determine that a plurality of volumes of a plurality of sounds are below a pre-determined threshold; and access the set of identifiers upon determining that the plurality of volumes of the plurality of sounds are below the pre-determined threshold. However, in the same field of endeavor, Hwan et al disclose a method comprising determining whether a plurality of volumes of a plurality of sounds are below a pre-determined threshold (Hwan et al; Para [0158][0112]-[0117]; zero sound is interpreted as sound below threshold), and accessing one of the set of identifiers of the direction of occurrence of one of the plurality of sounds upon determining that the plurality of volumes are below the pre-determined threshold (Hwan et al; Para [0158][0112]-[0117]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Hwan as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to enjoy the game regardless of surrounding conditions and hearing ability (Hwan et al; Para [0034]).
Regarding claim 21, Collins discloses a client device for communicating audio data (Collins; Para [0030]), comprising: one or more storage media storing instructions; and one or more processors configured to execute the instructions to cause the client device to (Collins; Para [0027][0064]): access a set of identifiers for an audio source in a scene of a video game to identify an emotion output from the audio source and a direction of the audio source (Collins; Para [0039]), wherein the direction is identified relative to a location of a virtual object being controlled in the scene by a user (Collins; Para [0061]); but does not expressly disclose provide data to generate a display region in the scene of the video game, wherein the display region includes the set of identifiers that graphically indicate the emotion output from the audio source and the direction of the audio source relative to the virtual object; presenting, based on at least the emotion, a border with the scene; wherein the first border indicates at least the direction of the audio source included in a plurality of audio sources. However, in the same field of endeavor, Liang et al disclose providing data to generate a display region in the scene of the video game (Liang et al; Para [0029]; Para [0033] game application on a remote server; client device receives data from remote server), wherein the display region includes the set of identifiers that graphically indicate the direction of the audio source relative to the virtual object (Liang et al; Para [0029] identifier 290D shows direction of audio source relative to the player in the display). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Liang as a visual aid for the audio attributes taught by Collins.
The motivation to do so would have been to provide subtitles or closed captions for computer and video games in a more flexible manner (Liang et al; Para [0003]). Moreover, in the same field of endeavor, Aoki et al disclose sending data to a client device to generate a display region in the scene of the video game (Aoki et al; Fig 7; Para [0025][0030][0041]), wherein the display region includes the set of identifiers that graphically indicate the emotion output from the audio source (Aoki et al; Fig 7; Para [0025][0030][0041]) and presenting, based on at least the emotion, a first border with the scene (Aoki et al; Para [0043]; windows showing emotion state). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the emotion presentation taught by Aoki as a visual aid for the emotion attributes taught by Collins. The motivation to do so would have been to greatly reduce the work burden on the player in a game (Aoki et al; Para [0046]). Furthermore, in the same field of endeavor, Hwan et al disclose a method wherein the first border indicates at least the direction of the audio source included in a plurality of audio sources (Hwan et al; Fig 9; Para [0083][0112]; border 931 indicates the direction of the sound source included in a plurality of audio sources in the environment). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Hwan as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to enjoy the game regardless of surrounding conditions and hearing ability (Hwan et al; Para [0034]).
Regarding claim 22, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the client device of claim 21, but do not expressly disclose wherein the instructions further cause the one or more processors to cause the client device to: determine whether a plurality of volumes of a plurality of sounds are below a pre-determined threshold; and access one of the set of identifiers based at least in part on the direction of occurrence of one of the plurality of sounds upon determining that the plurality of volumes are below the pre-determined threshold. However, in the same field of endeavor, Hwan et al disclose a device wherein the instructions further cause the one or more processors to cause the client device to: determine whether a plurality of volumes of a plurality of sounds are below a pre-determined threshold (Hwan et al; Para [0158][0112]-[0117]; zero sound is interpreted as sound below threshold); and access one of the set of identifiers based at least in part on the direction of occurrence of one of the plurality of sounds upon determining that the plurality of volumes are below the pre-determined threshold (Hwan et al; Para [0158][0112]-[0117]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Hwan as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to enjoy the game regardless of surrounding conditions and hearing ability (Hwan et al; Para [0034]).
Regarding claim 23, Collins in view of Liang et al, further in view of Aoki et al, and further in view of Hwan et al disclose the client device of claim 21, but do not expressly disclose wherein the instructions further cause the one or more processors to cause the client device to: determine whether a plurality of volumes of a plurality of sounds are below a pre-determined threshold; access at least one indicator of at least one amplitude of the at least one of the plurality of sounds upon determining that the plurality of volumes are below the pre-determined threshold; and provide the at least one indicator of the at least one amplitude to display the at least one indicator of the at least one amplitude with an output of the scene. However, in the same field of endeavor, Hwan et al disclose a method wherein the instructions further cause the one or more processors to cause the client device to: determine that a plurality of volumes of a plurality of sounds are below a pre-determined threshold (Hwan et al; Fig 9; Para [0112]-[0117]); access at least one indicator of at least one amplitude of at least one of a plurality of sounds to be output with the scene (Hwan et al; Fig 9; Para [0112]-[0117]; size of audio interpreted as audio amplitude); and send the at least one indicator of the at least one amplitude to display the at least one indicator of the at least one amplitude with the output of the scene (Hwan et al; Fig 9; Para [0112]-[0117]; size of audio interpreted as audio amplitude). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Hwan as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to enjoy the game regardless of surrounding conditions and hearing ability (Hwan et al; Para [0034]).
Regarding claim 24, Collins et al in view of Liang et al and further in view of Aoki et al and further in view of Hwan et al disclose the client device of claim 21, but do not expressly disclose wherein the instructions further cause the one or more processors to cause the client device to determine that a plurality of volumes of a plurality of sounds are below a pre-determined threshold, wherein the display region is displayed upon determining that the plurality of volumes of the plurality of sounds are below the pre-determined threshold. However, in the same field of endeavor, Hwan et al disclose a device wherein the instructions further cause the one or more processors to cause the client device to determine that a plurality of volumes of a plurality of sounds are below a pre-determined threshold (Hwan et al; Para [0158], [0112]-[0117]; zero sound is interpreted as sound below the threshold), wherein the display region is displayed upon determining that the plurality of volumes of the plurality of sounds are below the pre-determined threshold (Hwan et al; Para [0158], [0112]-[0117]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by Hwan as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to enjoy the game regardless of surrounding conditions and hearing ability (Hwan et al; Para [0034]).
Claim(s) 4, 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Collins (US 2011/0283865 A1) in view of Liang et al (US 2023/0201717 A1) and further in view of Aoki et al (JP 2002-315966 A) and further in view of Hwan et al (KR20200027909A) and further in view of MacAuley et al (US 2006/0015560 A1).
Regarding claim 4, Collins et al in view of Liang et al and further in view of Aoki et al and further in view of Hwan et al disclose the method of claim 1, but do not expressly disclose further comprising: generating a second border, different from the first border, based on at least one of a plurality of sounds to be output with the scene; and displaying the second border with the output of the scene. However, in the same field of endeavor, MacAuley et al disclose a method further comprising: generating a second border, different from the first border, based on at least one of a plurality of sounds to be output with the scene (MacAuley et al; Fig 6; border 601); and displaying the second border with the output of the scene (MacAuley et al; Fig 6; border 601 displays the sound output of a plurality of sound sources). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by MacAuley as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to improve processing speed and throughput (MacAuley et al; Para [0044]).
Regarding claim 19, Collins et al in view of Liang et al and further in view of Aoki et al and further in view of Hwan et al disclose the server of claim 11, but do not expressly disclose wherein the data is sent via a computer network to the client device, wherein the client device is configured to generate a plurality of images having the set of identifiers for display of the plurality of images on a display device. However, in the same field of endeavor, MacAuley et al disclose a device wherein the data is sent via a computer network to the client device (MacAuley et al; Para [0083]), wherein the display region having the set of identifiers is displayed with a display of the scene (MacAuley et al; Fig 6). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use the identifier taught by MacAuley as a visual aid for the audio attributes taught by Collins. The motivation to do so would have been to improve processing speed and throughput (MacAuley et al; Para [0044]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUASSI A GANMAVO, whose telephone number is (571) 270-5761. The examiner can normally be reached M-F, 9 AM-5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carolyn Edwards, can be reached at (571) 270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KUASSI A GANMAVO/Examiner, Art Unit 2692
/CAROLYN R EDWARDS/Supervisory Patent Examiner, Art Unit 2692