DETAILED ACTION
Applicant's Submission of a Response
Applicant’s submission of a response on 2/17/2026 has been received and fully considered. In the response, claims 1-3, 5-10, and 13-19 have been amended; claims 4, 11, and 12 have been canceled; and new claims 20-22 have been added. Therefore, claims 1-3, 5-10, and 13-22 are pending.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-9, and 16-22 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2020/0306641 to Kolen in view of U.S. Patent Application Publication No. 2023/0376328 to Nagar.
With regard to claim 1, Kolen discloses a computer-implemented method comprising:
analyzing a stored media file to determine a content characteristic (e.g., see at least paragraph 36, which discusses “the system may optionally access metadata information associated with music being streamed by the user”);
analyzing a virtual environment in a current interactive session to determine a parameter associated with the content characteristic that corresponds to current activity within the virtual environment (e.g., see at least paragraphs 37 and 38, which discuss that a “style preference may be utilized by the system to generate music, for example based on contextual information occurring in the electronic game” or that a “certain musical cue may be selected based on the user reaching a particular progression point within the electronic game”); and
modifying the parameter of the virtual environment presented during the current interactive session based at least in part on the content characteristic (e.g., see at least paragraph 47, which states that as “the user manipulates the character within the region, the game information 114 may be updated to indicate triggers or contextual information associated with the manipulation. In this way, if the user encounters a particular boss, or enters a particular [room], the game information 114 may be updated [to] reflect these encounters”; see also paragraph 50 for discussion of playing personalized music based on a user’s control of the game);
[claim 2] modifying the parameter of the virtual environment includes modifying the parameter based at least in part on a waveform associated with the content characteristic (e.g., see at least paragraphs 68 and 69 for discussion of creating audio waveforms; see also paragraph 72, which states “the audio waveform 144B may be utilized as an input to a different artificial neural network 224” and “Subsequent to training, the audio waveform 144B may be analyzed by the network 224 in view of style preference 212. The network 224 may then generate personalized music 102 in accordance with the style preference.”);
[claim 6] further comprising analyzing the stored media file to determine the content characteristics associated with the content characteristic (e.g., see at least paragraph 47, which states that as “the user manipulates the character within the region, the game information 114 may be updated to indicate triggers or contextual information associated with the manipulation. In this way, if the user encounters a particular boss, or enters a particular [room], the game information 114 may be updated [to] reflect these encounters”; see also paragraph 50 for discussion of playing personalized music based on a user’s control of the game);
[claim 7] further comprising analyzing an in-game scene within the virtual environment to identify a scene characteristic of the in-game scene (e.g., see at least paragraph 47, which states that as “the user manipulates the character within the region, the game information 114 may be updated to indicate triggers or contextual information associated with the manipulation. In this way, if the user encounters a particular boss, or enters a particular [room], the game information 114 may be updated [to] reflect these encounters”; see also paragraph 50 for discussion of playing personalized music based on a user’s control of the game);
[claim 8] further comprising comparing the content characteristic regarding one or more portions of the stored media file to the scene characteristic of the in-game scene (e.g., see at least paragraph 47, which states that as “the user manipulates the character within the region, the game information 114 may be updated to indicate triggers or contextual information associated with the manipulation. In this way, if the user encounters a particular boss, or enters a particular [room], the game information 114 may be updated [to] reflect these encounters”; see also paragraph 50 for discussion of playing personalized music based on a user’s control of the game);
[claim 9] wherein modifying the parameters of the virtual environment includes modifying sound or music of the in-game scene based at least in part on the content characteristic of the one of the portions of the stored media file and the scene characteristics of the in-game scene (e.g., see at least paragraph 47, which states that as “the user manipulates the character within the region, the game information 114 may be updated to indicate triggers or contextual information associated with the manipulation. In this way, if the user encounters a particular boss, or enters a particular [room], the game information 114 may be updated [to] reflect these encounters”; see also paragraph 50 for discussion of playing personalized music based on a user’s control of the game);
[claim 16] further comprising receiving a user selection of a user profile of a second stored media file, wherein the user profile is associated with a second content characteristic; and modifying the parameter includes modifying the parameter based at least in part on the second content characteristic (e.g., see at least paragraph 52, which discloses creating a user profile associated with music personalization);
[claim 20] further comprising identifying that the content characteristic determined from the stored media file corresponds to the current activity within the virtual environment (e.g., see at least paragraphs 37 and 38, which discuss that a “style preference may be utilized by the system to generate music, for example based on contextual information occurring in the electronic game” or that a “certain musical cue may be selected based on the user reaching a particular progression point within the electronic game”);
[claim 21] wherein: analyzing the stored media file to determine the content characteristic includes analyzing the stored media file to determine a plurality of content characteristics (e.g., see at least paragraph 36, which discusses “the system may optionally access metadata information associated with music being streamed by the user”);
each content characteristic of the plurality of content characteristics is associated with a corresponding emotional characteristic (e.g., see at least paragraph 12 for discussion of an indication of a “particular emotion or feeling to be achieved from the generated music”); the content characteristic is a first content characteristic of the plurality of content characteristics;
the first content characteristic is associated with a first emotional characteristic (e.g., see at least paragraph 12 for discussion of an indication of a “particular emotion or feeling to be achieved from the generated music”);
analyzing the virtual environment includes identifying a second emotional characteristic associated with an in-game scene that is depicted in the current interactive session and that includes the character (e.g., see at least paragraph 38, which discusses that different musical cues may be based on a theme or an emotion, wherein the emotion is the first characteristic and the theme is the second characteristic; alternatively, see paragraph 7, which describes more than one emotional response, including “sadness, happiness, excitement, and so on”);
the in-game scene corresponds to a game that is different than the stored media file (e.g., see at least paragraph 43 for discussion of systems that “generate personalized music for different electronic games”); and
the computer-implemented method further comprises generating a determination that the first emotional characteristic corresponds to the second emotional characteristic, wherein modifying the parameter includes modifying at least one of music or scenery in the in-game scene to correspond to the first content characteristic based at least in part on the determination (e.g., see at least paragraphs 7 and 8, which discuss adjusting the music based on a combination of different emotional responses and styles); and
[claim 22] wherein modifying the parameter includes modifying the in-game scene to correspond to the first content characteristic based at least in part on the determination (e.g., see at least paragraph 10, which discusses “if this portion of music were to be adjusted in style to a second style, the users may have a much greater affinity, or emotional response, to in game events, actions, and so on”).
With regard to claims 1 and 6, Kolen discloses all of the recited features but is silent regarding modifying a voice of an NPC or virtual object.
Reasonably pertinent to the problem faced, Nagar teaches modification of a voice (e.g., see at least paragraphs 18 and 74 for discussion of mimicking a voice).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Kolen with the voice modification as taught by Nagar in order to use a known technique to improve similar devices (methods, or products) in the same way. In this case, the voice modification provides a more engaging and entertaining voice output.
With regard to claims 3 and 18, Kolen discloses all of the recited features but is silent regarding the stored media file being sensor data.
Reasonably pertinent to the problem faced, Nagar teaches the use of sensor data for media files (e.g., see at least paragraphs 38, 59, and 77, which discuss user data collection via sensors).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Kolen with the sensor data taught by Nagar in order to use a known technique to improve similar devices (methods, or products) in the same way. In this case, the sensor data provides automatic user data input that reduces user effort and accurately collects data.
Claims 17 and 19, which are similar in scope to claim 1, are rejected as obvious over Kolen in view of Nagar based on the same analysis set forth above for claim 1.
Claims 5 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Kolen in view of Nagar and further in view of U.S. Patent Application Publication No. 2020/0298125 to Stroud.
With regard to claims 5 and 13-15, Kolen discloses all of the recited features but is silent regarding storing user inputs and generating a custom tutorial with mapped button input sequences.
Reasonably pertinent to the problem faced, Stroud teaches storing user inputs (e.g., see at least paragraphs 8, 16, 17, and 51 for discussion of storing a sequence of button presses) and generating a custom tutorial with mapped button input sequences (e.g., see at least Figs. 5A and 5B; see also paragraphs 108-114, which discuss analyzing a user’s behavior to help the user better understand nuances of the game play and to assist the user in improving game play) including an illustration (e.g., see at least Fig. 4B, illustration shown in “Hint C”; see also paragraph 109, which shows an icon on the GUI).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Kolen with the button sequences and tutorials as taught by Stroud in order to use a known technique to improve similar devices (methods, or products) in the same way. In this case, storing user button input sequences and generating a tutorial allows a user to learn from more skilled players by tracking the skilled players’ inputs.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kolen in view of Nagar and further in view of U.S. Patent Application Publication No. 2022/0319087 to Zhang.
With regard to claim 10, Kolen discloses all of the recited features but is silent regarding modifying the appearance of a virtual character.
Reasonably pertinent to the problem faced, Zhang teaches modifying the appearance of a virtual character (e.g., see at least paragraph 46, which describes using a virtual camera to modify the facial appearance of a virtual character).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Kolen with the facial modification for virtual characters as taught by Zhang in order to use a known technique to improve similar devices (methods, or products) in the same way. In this case, modifying facial appearances in the game provides a more personalized gaming experience that increases player enjoyment.
Response to Arguments
Applicant's arguments filed 2/17/2026 have been fully considered but they are not persuasive.
On page 8, second paragraph, Applicant notes that claim 16 has been amended to address informalities. In response to Applicant’s amendment to claim 16, the previous claim objection is withdrawn.
On page 9, Applicant argues the combination of Kolen and Nagar by noting the timing of Nagar’s modification (prior to the activity, as opposed to during the current activity). The Examiner respectfully disagrees. As discussed during the interview conducted prior to this response, Kolen discloses all of the recited features with the exception of the type of data file: in Kolen, the data file is directed to music, and Nagar is relied upon merely to teach the recited type of data file (voice data). Again, as discussed in the interview, the Examiner asserts that Applicant is arguing the references individually, instead of arguing the combination of references as applied.
On page 10, under New Claims, Applicant again argues the timing of Nagar, rather than the type of data that Nagar is relied upon to teach. Applicant does not argue that the base reference, Kolen, fails to disclose the newly added timing limitation of claim 20.
On pages 10-11, Applicant argues that Kolen fails to disclose modifying in-game scenery. Based on the disclosure and the recited claim features, it is the Examiner’s position that in-game scenery includes the audio scene or the visual scene. Because Kolen discloses modifying music in the in-game scene, Kolen discloses the features recited in claims 21 and 22.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES S MCCLELLAN whose telephone number is (571)272-7167. The examiner can normally be reached Monday-Friday (8:30AM-5:00PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kang Hu can be reached at 571-270-1344. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/James S. McClellan/Primary Examiner, Art Unit 3715