Prosecution Insights
Last updated: April 19, 2026
Application No. 18/443,515

GENERATING A MUSICAL SCORE FOR A GAME

Status: Non-Final OA (§102)
Filed: Feb 16, 2024
Examiner: CHAN, ALLEN
Art Unit: 3715
Tech Center: 3700 (Mechanical Engineering & Manufacturing)
Assignee: Sony Interactive Entertainment Inc.
OA Round: 1 (Non-Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 70%, above average (473 granted / 679 resolved; at TC average)
Interview Lift: +35.7%, strong (allow rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 2y 8m average prosecution; 26 applications currently pending
Career History: 705 total applications across all art units
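The headline figures above follow directly from the raw counts. As a sanity check, here is a minimal sketch (the function name and field choices are illustrative, not part of any product API) showing how 473 grants out of 679 resolved cases yields the displayed 70% career allow rate:

```python
# Hypothetical reconstruction of the headline examiner metrics from the
# raw counts shown above. Names here are illustrative, not a real API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

rate = allow_rate(473, 679)
print(f"Career allow rate: {rate:.1f}%")        # 69.7%, displayed as 70%
print(f"Share of career still pending: {26 / 705:.1%}")  # 26 of 705 applications
```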

Statute-Specific Performance

§101: 18.5% (-21.5% vs TC avg)
§103: 39.9% (-0.1% vs TC avg)
§102: 21.0% (-19.0% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 679 resolved cases.
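Each delta above is consistent with a Tech Center average estimate of about 40% per statute (back-computed from the displayed figures, not sourced from the page). A short sketch, assuming the delta is simply the examiner's rate minus the TC average:

```python
# Sketch of how the "vs TC avg" deltas above could be derived, assuming
# delta = examiner rate - Tech Center average. The 40.0% TC averages are
# back-computed from the figures shown, not taken from underlying data.

examiner_rate = {"101": 18.5, "103": 39.9, "102": 21.0, "112": 12.2}
tc_average    = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

for statute, rate in examiner_rate.items():
    delta = rate - tc_average[statute]
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```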

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

In response to the Preliminary Amendment filed on February 16, 2024, claims 4 and 10 have been amended. Claims 1-15 are currently pending.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-15 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Estanislao (US 2020/0406144 A1).

Regarding claim 1, Estanislao discloses a music generation apparatus for generating information indicative of a musical score for a game, the music generation apparatus comprising: communication circuitry to communicate with an entertainment device, the communication circuitry being configured to receive a request to generate the information indicative of a musical score for a game, and to receive one or more keywords output by the entertainment device during processing of the game, the one or more keywords being indicative of one or more of an action and/or a condition associated with a user playing the game, and some or all of the game's state (see par.
[0062], In accordance with some aspects of the present specification, the dynamic music generation module 140 implements a plurality of instructions or programmatic code to enable dynamic and real-time a) generation of music dependent on the player data representative of a virtual interaction and/or situation that the player encounters or engages in during gameplay, wherein the generated music is rendered or played on the client device 110 of the player, and b) augmentation, adaptation, re-mixing or modulation of the generated music on the basis of the responses, inputs, controls or movements of the player during a progression, development or advancement of the virtual interaction and/or situation; also see par. [0072], When any of the beginner, enthusiast or expert players encounters an interaction and/or situation of crucial merit or value the music generated is perceptible, reflective, or indicative of a frantic mood, for example. When any of the beginner, enthusiast, or expert players encounters an interaction and/or situation of marginal merit or value the music generated is perceptible, reflective or indicative of a calm mood, for example. When a beginner player encounters an interaction and/or situation of moderate merit or value the music generated is perceptible, reflective or indicative of a frantic mood, for example. On the other hand, when an enthusiast or expert player encounters an interaction and/or situation of moderate merit or value the music generated is perceptible, reflective or indicative of an energetic mood, for example; thus the player’s experience and type of situation are keywords used for modifying the music); and music generation circuitry responsive to the request to begin generating information indicative of the musical score, the music generation circuitry being responsive to receiving each of the one or more keywords to update the generation of the information indicative of the musical score in dependence on that keyword (see par. 
[0062], In accordance with some aspects of the present specification, the dynamic music generation module 140 implements a plurality of instructions or programmatic code to enable dynamic and real-time a) generation of music dependent on the player data representative of a virtual interaction and/or situation that the player encounters or engages in during gameplay, wherein the generated music is rendered or played on the client device 110 of the player, and b) augmentation, adaptation, re-mixing or modulation of the generated music on the basis of the responses, inputs, controls or movements of the player during a progression, development or advancement of the virtual interaction and/or situation).

Regarding claim 2, Estanislao discloses wherein the communication circuitry is configured to receive an event signal indicative of a sudden event in the game; and the music generation circuitry is responsive to the event signal to: generate a sudden change in the musical score to coincide with the sudden event; or stop generating and/or outputting the information indicative of the musical score (see par. [0069], In some embodiments, at least a subset of the plurality of music elements is manipulated to generate music based on at least two vectors: 1) the importance, value, or merit of a particular game event, interaction or situation and 2) the player's skill and/or experience profile).

Regarding claim 3, Estanislao discloses wherein the communication circuitry is configured to receive one or more seed values; and the music generation circuitry is responsive to the or each seed to generate the information indicative of the musical score in dependence on the or each seed in conjunction with the or each keyword (see par. [0091], At step 302, each music clip, in a primary dataset of pre-stored ‘seed’ music clips, is encoded in a format suitable for input into the ML model for training).
Regarding claim 4, Estanislao discloses wherein the or each seed is unique to one or more of: i. a developer of the game; ii. the game's title; and iii. one or more aspects of the game's state (see par. [0112], In other words, the module 140 may determine that the ‘seed’ music needs to be calm. Consequently, the module 140 selects a modulation data structure (from the second plurality of modulation data structures) associated with a music clip from the ninth dataset and feeds the modulation data structure as the second input to the at least one trained ML model. This results in the at least one trained ML model modulating the ‘seed’ music clip of the first input to generate or output a perceptibly calmer version of the ‘seed’ music).

Regarding claim 5, Estanislao discloses a machine learning system trained with one or more keywords as input and information indicative of a musical score as a target output (see par. [0110], At step 330, the dynamic music generation module 140 leverages the at least one trained machine learning module to modulate the generated music based on the player's responses or reactions during progression of the player's engagement with the one or more virtual elements during gameplay).

Regarding claim 6, Estanislao discloses in which the machine learning system has been trained with one or more current and preceding keywords as input and information indicative of a transition in musical score as a target output (see par. [0112], For example, the module 140 may determine that the player of ‘beginner’ profile who was engaged in gameplay of moderate merit or value (for example, attacking multiple enemies with sniper rifle) is losing and therefore may need to dynamically modulate the ‘seed’ music to make the experience less intense for the player. In other words, the module 140 may determine that the ‘seed’ music needs to be calm.
Consequently, the module 140 selects a modulation data structure (from the second plurality of modulation data structures) associated with a music clip from the ninth dataset and feeds the modulation data structure as the second input to the at least one trained ML model).

Regarding claim 7, Estanislao discloses in which the machine learning system has been trained with one or more seed values as further input, the seed values being uncorrelated with the target output (see par. [0091], At step 302, each music clip, in a primary dataset of pre-stored ‘seed’ music clips, is encoded in a format suitable for input into the ML model for training).

Regarding claim 8, Estanislao discloses wherein the music generation apparatus is provided as one of: a cloud server accessible to the entertainment device via an internet connection; local circuitry accessible to the entertainment device via a wired link or a short-range wireless link; or circuitry integrated with the entertainment device (see par. [0066], In some embodiments, a portion of the programmatic instructions related to the dynamic music generation module 140 is implemented on the one or more game servers 105 while another portion of the programmatic instructions may reside and be implemented on a player's game module 121).

Regarding claim 9, Estanislao discloses wherein the music generation circuitry is configured to generate, as the information indicative of the musical score, a Musical Instrument Digital Interface, MIDI, file or stream (see par. [0092], For example, if the music clips in the primary dataset are MIDI (Musical Instrument Digital Interface) files).
Regarding claim 10, Estanislao discloses a system comprising: a music generation apparatus for generating information indicative of a musical score for a game, the music generation apparatus comprising: (i) communication circuitry to communicate with an entertainment device, the communication circuitry being configured to receive a request to generate the information indicative of a musical score for a game, and to receive one or more keywords output by the entertainment device during processing of the game, the one or more keywords being indicative of one or more of an action and/or a condition associated with a user playing the game, and some or all of the game's state (see par. [0062], In accordance with some aspects of the present specification, the dynamic music generation module 140 implements a plurality of instructions or programmatic code to enable dynamic and real-time a) generation of music dependent on the player data representative of a virtual interaction and/or situation that the player encounters or engages in during gameplay, wherein the generated music is rendered or played on the client device 110 of the player, and b) augmentation, adaptation, re-mixing or modulation of the generated music on the basis of the responses, inputs, controls or movements of the player during a progression, development or advancement of the virtual interaction and/or situation; also see par. [0072], When any of the beginner, enthusiast or expert players encounters an interaction and/or situation of crucial merit or value the music generated is perceptible, reflective, or indicative of a frantic mood, for example. When any of the beginner, enthusiast, or expert players encounters an interaction and/or situation of marginal merit or value the music generated is perceptible, reflective or indicative of a calm mood, for example. 
When a beginner player encounters an interaction and/or situation of moderate merit or value the music generated is perceptible, reflective or indicative of a frantic mood, for example. On the other hand, when an enthusiast or expert player encounters an interaction and/or situation of moderate merit or value the music generated is perceptible, reflective or indicative of an energetic mood, for example; thus the player’s experience and type of situation are keywords used for modifying the music); and (ii) music generation circuitry responsive to the request to begin generating information indicative of the musical score, the music generation circuitry being responsive to receiving each of the one or more keywords to update the generation of the information indicative of the musical score in dependence on that keyword (see par. [0062], In accordance with some aspects of the present specification, the dynamic music generation module 140 implements a plurality of instructions or programmatic code to enable dynamic and real-time a) generation of music dependent on the player data representative of a virtual interaction and/or situation that the player encounters or engages in during gameplay, wherein the generated music is rendered or played on the client device 110 of the player, and b) augmentation, adaptation, re-mixing or modulation of the generated music on the basis of the responses, inputs, controls or movements of the player during a progression, development or advancement of the virtual interaction and/or situation); and an entertainment device comprising: (i) processing circuitry to process a game state of the game in dependence on game data and the action and/or the condition associated with the user playing the game (see par. 
[0069], In some embodiments, at least a subset of the plurality of music elements is manipulated to generate music based on at least two vectors: 1) the importance, value, or merit of a particular game event, interaction or situation and 2) the player's skill and/or experience profile. In one embodiment, the dynamic music generation module 140 receives, from at least one of the game module 121 or master game module 120, data indicative of a player's movement, interactions, or situation in a game in real-time—that is, data indicative of the merit or value of an interaction and/or situation that the player encounters during gameplay); and (ii) selection circuitry to select, during processing of the game, in dependence on one or more of the action and/or the condition associated with the user, and the game state, the one or more keywords for use in generating information indicative of a musical score for the game (see par. [0069], In some embodiments, at least a subset of the plurality of music elements is manipulated to generate music based on at least two vectors: 1) the importance, value, or merit of a particular game event, interaction or situation and 2) the player's skill and/or experience profile. In one embodiment, the dynamic music generation module 140 receives, from at least one of the game module 121 or master game module 120, data indicative of a player's movement, interactions, or situation in a game in real-time—that is, data indicative of the merit or value of an interaction and/or situation that the player encounters during gameplay).

Regarding claims 11 and 12, Estanislao discloses a method for generating information indicative of a musical score for a game, the method comprising: receiving a request to generate information indicative of a musical score for a game (see par.
[0062], In accordance with some aspects of the present specification, the dynamic music generation module 140 implements a plurality of instructions or programmatic code to enable dynamic and real-time a) generation of music dependent on the player data representative of a virtual interaction and/or situation that the player encounters or engages in during gameplay); receiving one or more keywords output by an entertainment device during processing of the game, the one or more keywords being indicative of one or more of an action and/or a condition associated with a user playing the game and the game state (see par. [0072], When any of the beginner, enthusiast or expert players encounters an interaction and/or situation of crucial merit or value the music generated is perceptible, reflective, or indicative of a frantic mood, for example. When any of the beginner, enthusiast, or expert players encounters an interaction and/or situation of marginal merit or value the music generated is perceptible, reflective or indicative of a calm mood, for example. When a beginner player encounters an interaction and/or situation of moderate merit or value the music generated is perceptible, reflective or indicative of a frantic mood, for example. On the other hand, when an enthusiast or expert player encounters an interaction and/or situation of moderate merit or value the music generated is perceptible, reflective or indicative of an energetic mood, for example; thus the player’s experience and type of situation are keywords used for modifying the music); in response to the request and the or each keyword (see par. 
[0072], When any of the beginner, enthusiast or expert players encounters an interaction and/or situation of crucial merit or value the music generated is perceptible, reflective, or indicative of a frantic mood, for example; thus the music is being generated in response to keywords related to the player’s profile and type of situation); and generating the information indicative of the musical score for the game in dependence on the or each keyword (see par. [0072], When any of the beginner, enthusiast or expert players encounters an interaction and/or situation of crucial merit or value the music generated is perceptible, reflective, or indicative of a frantic mood, for example. When any of the beginner, enthusiast, or expert players encounters an interaction and/or situation of marginal merit or value the music generated is perceptible, reflective or indicative of a calm mood, for example. When a beginner player encounters an interaction and/or situation of moderate merit or value the music generated is perceptible, reflective or indicative of a frantic mood, for example. On the other hand, when an enthusiast or expert player encounters an interaction and/or situation of moderate merit or value the music generated is perceptible, reflective or indicative of an energetic mood, for example; thus the music is being generated in response to keywords related to the player’s profile and type of situation).

Regarding claims 13-15, Estanislao discloses an entertainment device, comprising: processing circuitry to process a game state of a game in dependence on game data and an action and/or a condition associated with a user playing the game (see par. [0069], In some embodiments, at least a subset of the plurality of music elements is manipulated to generate music based on at least two vectors: 1) the importance, value, or merit of a particular game event, interaction or situation and 2) the player's skill and/or experience profile.
In one embodiment, the dynamic music generation module 140 receives, from at least one of the game module 121 or master game module 120, data indicative of a player's movement, interactions, or situation in a game in real-time—that is, data indicative of the merit or value of an interaction and/or situation that the player encounters during gameplay); selection circuitry to select, during processing of the game, in dependence on one or more of the action and/or the condition associated with the user and the game state, one or more keywords for use in generating information indicative of a musical score for the game (see par. [0069], In some embodiments, at least a subset of the plurality of music elements is manipulated to generate music based on at least two vectors: 1) the importance, value, or merit of a particular game event, interaction or situation and 2) the player's skill and/or experience profile. In one embodiment, the dynamic music generation module 140 receives, from at least one of the game module 121 or master game module 120, data indicative of a player's movement, interactions, or situation in a game in real-time—that is, data indicative of the merit or value of an interaction and/or situation that the player encounters during gameplay).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Sharifi et al. (US 12,230,252 B2); Luo et al. (US 9,098,579 B2); Venti et al. (US 12,347,409 B1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLEN CHAN whose telephone number is (571) 270-5529. The examiner can normally be reached Monday-Friday, 11:00 AM EST to 7:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dmitry Suhol, can be reached at (571) 272-4430. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALLEN CHAN/
Primary Examiner, Art Unit 3715
12/13/2025

Prosecution Timeline

Feb 16, 2024: Application Filed
Dec 13, 2025: Non-Final Rejection (§102)
Mar 19, 2026: Interview Requested
Apr 01, 2026: Examiner Interview Summary
Apr 01, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596516: Adaptive Synchronization of Objects in a Distributed Metaverse (granted Apr 07, 2026; 2y 5m to grant)
Patent 12582909: System and Method for Controlling Audio (granted Mar 24, 2026; 2y 5m to grant)
Patent 12569740: System and Method for Determining the Maximum Running Speed of a Runner and Uses Thereof (granted Mar 10, 2026; 2y 5m to grant)
Patent 12564789: Tuning Upscaling for Each Computer Game Object and Object Portion Based on Priority (granted Mar 03, 2026; 2y 5m to grant)
Patent 12558620: Method and Apparatus for Recording Scene in Game, and Device and Storage Medium (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 99% (+35.7%)
Median Time to Grant: 2y 8m
PTA Risk: Low

Based on 679 resolved cases by this examiner. Grant probability derived from career allow rate.
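One plausible reading of the interview figures, offered here only as an assumption (the page does not state its formula), is that the +35.7% lift is a percentage-point difference in allow rate between interviewed and non-interviewed resolved cases. Under that reading, the displayed numbers imply:

```python
# Hypothetical interpretation of the interview-lift figures, assuming the
# +35.7% lift is a percentage-point gap between interviewed and
# non-interviewed resolved cases. Inputs are the values shown on the page;
# the underlying case-level data is not available here.

with_interview = 99.0   # allow rate with an interview, as displayed (%)
lift = 35.7             # stated interview lift (percentage points)

without_interview = with_interview - lift
print(f"Implied allow rate without interview: {without_interview:.1f}%")
```

If the lift were instead a relative (multiplicative) uplift, the implied baseline would differ, so treat this strictly as one reading of the displayed numbers.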
