Prosecution Insights
Last updated: April 19, 2026
Application No. 17/886,604

MUSIC GENERATION DEVICE, MUSIC GENERATION METHOD, AND RECORDING MEDIUM

Non-Final OA: §103, §112
Filed: Aug 12, 2022
Examiner: SCOLES, PHILIP GRANT
Art Unit: 2837
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Panasonic Holdings Corporation
OA Round: 1 (Non-Final)
Grant Probability: 56% (Moderate)
OA Rounds: 1-2
To Grant: 3y 10m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 56% (30 granted / 54 resolved; -12.4% vs TC avg)
Interview Lift: +21.3% (strong), comparing resolved cases with and without an interview
Typical Timeline: 3y 10m average prosecution; 36 applications currently pending
Career History: 90 total applications across all art units

Statute-Specific Performance

§101: 1.6% (-38.4% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§102: 22.0% (-18.0% vs TC avg)
§112: 20.2% (-19.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 54 resolved cases.
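
These figures are simple ratios and percentage-point gaps, so they can be sanity-checked directly from the counts shown above. Below is a minimal sketch of that arithmetic in Python; the function names are hypothetical, and the Tech Center average used in the example is back-calculated from the displayed delta rather than taken from any source data.

# Illustrative sketch only (not this page's actual implementation):
# reproducing the headline metrics from the counts shown above.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def delta_vs_tc(examiner_rate: float, tc_avg_rate: float) -> float:
    """Percentage-point gap between this examiner and the Tech Center average."""
    return examiner_rate - tc_avg_rate

# 30 granted of 54 resolved -> 55.6%, displayed as 56%
print(f"Career allow rate: {allow_rate(30, 54):.1f}%")

# §103 example: 53.3% for this examiner against an implied ~40.0% TC average
# reproduces the "+13.3% vs TC avg" figure shown above.
print(f"Section 103 delta: {delta_vs_tc(53.3, 40.0):+.1f} points")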

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on 8/12/2022 and 9/8/2022 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 10 recites a conjunctive list of elements which derives its antecedent basis from a disjunctive list of the elements in claim 4. This renders the scope of claim 10 ambiguous. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 6, and 10-17 are rejected under 35 U.S.C. 103 as unpatentable over Hutchings et al. ("Adaptive Music Composition for Games," July 2, 2019, retrieved November 20, 2025 from https://arxiv.org/pdf/1907.01154), hereinafter Hutchings, in view of Hoeberechts et al. (US 20100307320 A1, December 9, 2010), hereinafter Hoeberechts.
Regarding claim 1, Hutchings teaches a music generation device that generates music (Hutchings § VI: "the AMS was developed as a stand-alone package that receives messages from video games in real-time to generate a model of the game-state and output music."), the music generation device comprising: an acquisition unit (Hutchings fig. 3: Spreading Activation Model. Hutchings § VI(A): "A model of spreading activation was implemented as a weighted, undirected graph G = (V, E), where V is a set of vertices and E a set of edges, using the Python library NetworkX. Each vertex represents a concept or affect, with the weight of the vertex wV : V ⇒ A (for activation A ∈ IR) representing activation between the values 0 and 100. Normalised edge weights wE : E ⇒ C (for association strength C ∈ IR) represent the level of association between vertices and are used to facilitate the spreading of activation. Three node types are used to represent affect (A), objects (O) and environments (N) such that V = A ∪ O ∪ N. The graph is not necessarily connected and edges never form between affect vertices… Every 30ms, the list of messages received by the OSC server is used to update the activation values of vertices in the graph.") that acquires first stream data (Hutchings fig. 3 and § VI(A): "Environments (N)") and second stream data different from the first stream data (Hutchings fig. 3 and § VI(A): "objects (O)"); an accompaniment generation unit that generates accompaniment information, which is music data indicating an accompaniment (Hutchings (VI)(B): "In the AMS, multiple software agents are used to generate musical arrangements… Agents were designed to have either Harmony, Melody or Percussive Rhythm roles for developing polyphonic compositions with a mixture of percussive and pitched instruments."), based on a change in the first stream data (Hutchings fig. 3 teaches "Environments" → "Establish Style" → "Harmony Agent"); a melody generation unit that generates melody information, which is music data indicating a melody (Hutchings (VI)(B): "In the AMS, multiple software agents are used to generate musical arrangements… Agents were designed to have either Harmony, Melody or Percussive Rhythm roles for developing polyphonic compositions with a mixture of percussive and pitched instruments."), based on a change in the second stream data (Hutchings fig. 3 teaches "Objects" → "Select Melodies" → "XCS Melody Manipulation" → "Melody Agent A"); and an output unit that outputs the generated musical piece information (Hutchings § VI: "the AMS was developed as a stand-alone package that receives messages from video games in real-time to generate a model of the game-state and output music."). Hutchings does not explicitly disclose a melody adjustment unit that adjusts the melody information in accordance with a key of the accompaniment indicated by the generated accompaniment information; and a music combining unit that combines the accompaniment information and the adjusted melody information to generate musical piece information. 
However, Hoeberechts teaches a melody adjustment unit that adjusts the melody information in accordance with a key of the accompaniment indicated by the generated accompaniment information (Hoeberechts ¶0153: "In any event, the line producer 220c maps the motif pattern onto a previously generated harmonic pattern by: converting the motif pattern to a harmony adjusted motif based on the previously generated harmonic pattern; bringing each note in the motif pattern into a range of a previously generated mode; and resolving each note in the motif pattern into at least one of a pitch of the previously generated mode and a nearby dissonant note, based on the harmonic chords in the previously generated harmonic pattern."); and a music combining unit that combines the accompaniment information and the adjusted melody information to generate musical piece information (Hoeberechts ¶0003: "A first aspect of the specification provides a flexible music composition engine, comprising a processing unit. The processing unit is enabled to create a pipeline for coordinating generation of a musical piece. The processing unit is further enabled to load at least one producer into the pipeline, the at least one producer for producing at least one high level musical element of the musical piece, independent of other producers in the pipeline. The processing unit is further enabled to call at least one generator, via the at least one producer, the at least one generator for generating at least one low level musical element of the musical piece. The processing unit is further enabled to integrate the at least one low level musical element and the at least one high level musical element, such that the processing unit produces the musical piece in real time."). It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the music generation device of Hutchings by adding the melody adjustment and combining units of Hoeberechts to generate music with a melody that is suitable to the accompaniment (Hutchings § VI(B)(2)). [Image reproduced in the original Office Action: media_image1.png] Regarding claim 2, Hutchings (in view of Hoeberechts) teaches a music generation device comprising the features of claim 1 as discussed above. Hutchings further teaches that the second stream data includes a plurality of pieces of stream data (Hutchings § VII: "In Zelda: MoS, concepts such as ‘Grandma’ and ‘Bakery’ were given an activation of 100 when on screen, because there was only one of each of these object types in the game."), and the melody generation unit generates the melody information indicating a plurality of melodies based on a change in each of the plurality of pieces of stream data (Hutchings § VI(B)(3): "The melodic theme assigned to the highest activated object is selected for use by melody agents when melodic content is needed in the score. A melody agent exists for each instrument defined in the score by the human composer."). Regarding claim 3, Hutchings (in view of Hoeberechts) teaches a music generation device comprising the features of claim 1 as discussed above. Hutchings further teaches that the accompaniment generation unit generates the accompaniment information in units of bars (Hutchings § VI(B): "Agents generate two measures worth of content at a time.").
Hoeberechts further teaches that the melody generation unit generates the melody information in units of beats (Hoeberechts ¶¶ 0143-0144: "Table 1, Example of a Motif Encoded Independent of Mode and Harmony. Within Table 1, “Pitches” are positions in the mode relative to the root of the harmonic chord (with the root as 0), “Locations” indicate an offset from the beginning of the bar (location 0.0), and “Durations” specify the length of the note. Locations and durations are expressed in terms of number of beats (for example, in 6/8 time, an eighth note is one beat)."). Regarding claim 4, Hutchings (in view of Hoeberechts) teaches a music generation device comprising the features of claim 3 as discussed above. Hutchings further teaches that the melody generation unit changes at least one of raising or lowering of the melody, a rhythm, a chord, a volume, or a musical instrument based on the change in the second stream data (Hutchings § VII: "The activation of objects and environments (Fig. 2), through explicit programmatic rules, spreads to affect categories, which in turn affects parameters of the musical compositions." Hutchings § VI(B)(3): "XCS is used to evolve rules for modifying pre-composed melodic fragments using melody operators as actions… augmentation and diminution extend and shorten the length of each note element-wise, respectively by a constant factor and inversion inverts the pitch steps between notes."). Regarding claim 6, Hutchings (in view of Hoeberechts) teaches a music generation device comprising the features of claim 1 as discussed above. Hutchings further teaches that the melody generation unit expresses a rhythm by generating the melody in such a way that one beat is constituted by sounds represented by a plurality of notes in a case where an amount of change in the second stream data is larger than a predetermined threshold, and generating the melody in such a way that one beat is constituted by a sound represented by one note in a case where the amount of change is smaller than the predetermined threshold (Hutchings § VI(B)(3): "the melody agents themselves should consider melodic shape, harmony and rhythmic qualities when adding to the composition. In this system a simple style score (P) is introduced that rates rhythmic density emphasis based on style using ad-hoc parameters tuned by ear. Parameters of notes per beat (n_b) and binary value for phrase starting off the beat (1 = true, 0 = false) represented by o_b." Hutchings' teaching in § VI(B)(3) of a notes per beat parameter and a threshold decision mechanism based on notes per second (n_s) in the melody agents reasonably suggests a threshold for determining whether n_b consists of one note or multiple notes.). Regarding claim 10, Hutchings (in view of Hoeberechts) teaches a music generation device comprising the features of claim 4 as discussed above. Hutchings further teaches that the melody generation unit changes the raising or lowering of the melody and the rhythm (Hutchings § VI(B)(3): "XCS is used to evolve rules for modifying pre-composed melodic fragments using melody operators as actions… augmentation and diminution extend and shorten the length of each note element-wise, respectively by a constant factor and inversion inverts the pitch steps between notes."), the chord (Hutchings § VI(B)(2): "The confidence of the agent is compared with other agents in the AMS to establish a ‘leader’ agent at any given time." This suggests that a melody agent can influence the harmony agent's selection of a chord.), and the musical instrument (Hutchings § II: "Music in most modern video games displays some adaptive behaviour. Dynamic layering of instrument parts has become particularly common in commercial games, where instantaneous environment changes, such as speed of movement, number of competing agents and player actions, add or remove sonic layers of the score."). Hoeberechts further teaches changing the volume (Hoeberechts ¶0089: "At an optional step 420 the emotion mapper 240 can be called to adjust global settings (e.g. musical characteristics that are not changed elsewhere, such as tempo, volume etc.)."). Furthermore, segmentation of change into musical beats, bars (measures), and pieces (of which the BRI comprises any segmentation length of music) is commonly known to those of ordinary skill in the art; selection of a unit of change comprises a result-effective variable. See MPEP § 2141.02(V). Regarding claim 11, Hutchings (in view of Hoeberechts) teaches a music generation device comprising the features of claim 1 as discussed above. Hoeberechts further suggests that the melody adjustment unit moves a pitch of a sound that is not included in the key of the accompaniment among a plurality of sounds constituting one beat of the melody (Hoeberechts ¶¶ 0143-0144: "Table 1, Example of a Motif Encoded Independent of Mode and Harmony. Within Table 1, “Pitches” are positions in the mode relative to the root of the harmonic chord (with the root as 0), “Locations” indicate an offset from the beginning of the bar (location 0.0), and “Durations” specify the length of the note. Locations and durations are expressed in terms of number of beats (for example, in 6/8 time, an eighth note is one beat).") to a pitch of a closest sound in a chord of the key (Hoeberechts ¶0153: "In any event, the line producer 220c maps the motif pattern onto a previously generated harmonic pattern by: converting the motif pattern to a harmony adjusted motif based on the previously generated harmonic pattern; bringing each note in the motif pattern into a range of a previously generated mode; and resolving each note in the motif pattern into at least one of a pitch of the previously generated mode and a nearby dissonant note, based on the harmonic chords in the previously generated harmonic pattern."). Regarding claim 12, Hutchings (in view of Hoeberechts) teaches a music generation device comprising the features of claim 1 as discussed above. Hutchings further suggests a plurality of decision functions for a plurality of levels (Hutchings § VI(A): "Each vertex represents a concept or affect, with the weight of the vertex wV : V ⇒ A (for activation A ∈ IR) representing activation between the values 0 and 100. Normalised edge weights wE : E ⇒ C (for association strength C ∈ IR) represent the level of association between vertices and are used to facilitate the spreading of activation. Three node types are used to represent affect (A), objects (O) and environments (N) such that V = A ∪ O ∪ N. The graph is not necessarily connected and edges never form between affect vertices. A new instance of the graph with the six affect categories of sadness, happiness, threat, anger, tenderness and excitement is spawned whenever a new game is started.
The model is used to represent game context in real-time, regular messages from the game are used to update the graph.") indicating intensity of the melody (Hutchings § V: "Excitement was added as a category following consultation with professional game composers that revealed excitement to be an important aspect of emotion for scoring video games not covered in the basic affect categories. These categories are not intended to represent a complete cognitive model, rather a targeted selection of terms that balance expressive range with generalisability within the context of scoring music for video games."), wherein each of the plurality of levels includes a plurality of bands (Hutchings § VI(B)(3): "The activation of each affect category is represented with a 2 bit binary number, to show concept activations of 0-24, 25-49, 50-74 or 75-100 (see Section VI-A)") that define an amount of change in the second stream data (Hutchings § VII: "Other objects such as ‘Heart’ and ‘Big Green Knight’ were given activation levels of 20 for appearing on screen and increased by 10 for each interaction the player character had with them. Attacking a knight, or being attacked by knight counted as an interaction."), each of the plurality of decision functions is a function that defines a setting content of the melody corresponding to each of the plurality of bands (Hutchings § VI(B)(2): "Notes per second (ns), mean pitch interval (n¯) and ratio of diatonic to non-diatonic notes (d) in each phrase were used for the calculation of rewards"), and the melody generation unit sets a decision function corresponding to any one of the plurality of levels, and generates the melody based on a setting content of the set decision function (Hutchings § VI(B)(3): "At any point of gameplay, the knowledge graph represents the activation of emotion, object and environment concepts. The melodic theme assigned to the highest activated object is selected for use by melody agents when melodic content is needed in the score. A melody agent exists for each instrument defined in the score by the human composer and each agent adds melodic content every two measures. XCS is used to evolve rules for modifying pre-composed melodic fragments using melody operators as actions. It is used to reduce the search space of melody operations to support real-time responsiveness of the system."). Regarding claim 13, Hutchings (in view of Hoeberechts) teaches a music generation device comprising the features of claim 12 as discussed above. Hutchings further suggests that the setting content of the melody defined by the decision function includes at least one of a slope in raising or lowering of the melody, a rhythm of one beat, or a probability of a chord to be included in one beat (Hutchings § VI(B)(3): "the melody agents themselves should consider melodic shape, harmony and rhythmic qualities when adding to the composition."). Regarding claim 14, Hutchings (in view of Hoeberechts) teaches a music generation device comprising the features of claim 1 as discussed above. Hutchings further suggests an ignition condition that defines in advance a condition as to whether or not to produce each of a plurality of melodies (Hutchings § VI(A): "Melodic themes were precomposed for a subset of known objects in the game. These themes were implemented as properties of object vertices (Eqn. 1)."
Hutchings § VI(B)(3): "At any point of gameplay, the knowledge graph represents the activation of emotion, object and environment concepts. The melodic theme assigned to the highest activated object is selected for use by melody agents when melodic content is needed in the score. A melody agent exists for each instrument defined in the score by the human composer and each agent adds melodic content every two measures."), wherein the melody generation unit includes, in the melody information, one or more melodies satisfying the ignition condition (Hutchings § VI(B)(3): "Only melodic fragments resulting from rules with rewards estimated above a threshold value (R > 0.6 by default) are considered "). Regarding claim 15, Hutchings (in view of Hoeberechts) teaches a music generation device comprising the features of claim 14 as discussed above. Hutchings further suggests that the ignition condition includes any of a case where an amount of change in the second stream data exceeds a certain threshold, a case where a set time condition is satisfied, and a case where a set event has occurred (Hutchings § VI(B)(3): "A melody agent exists for each instrument defined in the score by the human composer and each agent adds melodic content every two measures."). Regarding claim 16, Hutchings teaches a music generation method performed by a processor of a music generation device that generates music (Hutchings § VI: "the AMS was developed as a stand-alone package that receives messages from video games in real-time to generate a model of the game-state and output music."), the music generation method comprising: acquiring (Hutchings fig. 3: Spreading Activation Model. Hutchings § VI(A): "A model of spreading activation was implemented as a weighted, undirected graph G = (V, E), where V is a set of vertices and E a set of edges, using the Python library NetworkX. Each vertex represents a concept or affect, with the weight of the vertex wV : V ⇒ A (for activation A ∈ IR) representing activation between the values 0 and 100. Normalised edge weights wE : E ⇒ C (for association strength C ∈ IR) represent the level of association between vertices and are used to facilitate the spreading of activation. Three node types are used to represent affect (A), objects (O) and environments (N) such that V = A ∪ O ∪ N. The graph is not necessarily connected and edges never form between affect vertices… Every 30ms, the list of messages received by the OSC server is used to update the activation values of vertices in the graph.") first stream data (Hutchings fig. 3 and § VI(A): "Environments (N)") and second stream data different from the first stream data (Hutchings fig. 3 and § VI(A): "objects (O)"); generating accompaniment information, which is music data indicating an accompaniment (Hutchings (VI)(B): "In the AMS, multiple software agents are used to generate musical arrangements… Agents were designed to have either Harmony, Melody or Percussive Rhythm roles for developing polyphonic compositions with a mixture of percussive and pitched instruments."), based on a change in the first stream data (Hutchings fig. 
3 teaches "Environments" → "Establish Style" → "Harmony Agent"); generating melody information, which is music data indicating a melody (Hutchings (VI)(B): "In the AMS, multiple software agents are used to generate musical arrangements… Agents were designed to have either Harmony, Melody or Percussive Rhythm roles for developing polyphonic compositions with a mixture of percussive and pitched instruments."), based on a change in the second stream data (Hutchings fig. 3 teaches "Objects" → "Select Melodies" → "XCS Melody Manipulation" → "Melody Agent A"); and outputting the generated musical piece information (Hutchings § VI: "the AMS was developed as a stand-alone package that receives messages from video games in real-time to generate a model of the game-state and output music."). Hutchings does not explicitly disclose adjusting the melody information in accordance with a key of the accompaniment indicated by the generated accompaniment information; and combining the accompaniment information and the adjusted melody information to generate musical piece information. However, Hoeberechts teaches adjusting the melody information in accordance with a key of the accompaniment indicated by the generated accompaniment information (Hoeberechts ¶0153: "In any event, the line producer 220c maps the motif pattern onto a previously generated harmonic pattern by: converting the motif pattern to a harmony adjusted motif based on the previously generated harmonic pattern; bringing each note in the motif pattern into a range of a previously generated mode; and resolving each note in the motif pattern into at least one of a pitch of the previously generated mode and a nearby dissonant note, based on the harmonic chords in the previously generated harmonic pattern."); and combining the accompaniment information and the adjusted melody information to generate musical piece information (Hoeberechts ¶0003: "A first aspect of the specification provides a flexible music composition engine, comprising a processing unit. The processing unit is enabled to create a pipeline for coordinating generation of a musical piece. The processing unit is further enabled to load at least one producer into the pipeline, the at least one producer for producing at least one high level musical element of the musical piece, independent of other producers in the pipeline. The processing unit is further enabled to call at least one generator, via the at least one producer, the at least one generator for generating at least one low level musical element of the musical piece. The processing unit is further enabled to integrate the at least one low level musical element and the at least one high level musical element, such that the processing unit produces the musical piece in real time."). It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the music generation method of Hutchings by adding the melody adjustment and combining units of Hoeberechts to generate music with a melody that is suitable to the accompaniment (Hutchings § VI(B)(2)). 
Regarding claim 17, Hutchings teaches a non-transitory computer-readable recording medium for recording a music generation program in a music generation device that generates music (Hutchings § VI: "the AMS was developed as a stand-alone package that receives messages from video games in real-time to generate a model of the game-state and output music."), the music generation program causing a processor of the music generation device to perform: acquiring (Hutchings fig. 3: Spreading Activation Model. Hutchings § VI(A): "A model of spreading activation was implemented as a weighted, undirected graph G = (V, E), where V is a set of vertices and E a set of edges, using the Python library NetworkX. Each vertex represents a concept or affect, with the weight of the vertex wV : V ⇒ A (for activation A ∈ IR) representing activation between the values 0 and 100. Normalised edge weights wE : E ⇒ C (for association strength C ∈ IR) represent the level of association between vertices and are used to facilitate the spreading of activation. Three node types are used to represent affect (A), objects (O) and environments (N) such that V = A ∪ O ∪ N. The graph is not necessarily connected and edges never form between affect vertices… Every 30ms, the list of messages received by the OSC server is used to update the activation values of vertices in the graph.") first stream data (Hutchings fig. 3 and § VI(A): "Environments (N)") and second stream data different from the first stream data (Hutchings fig. 3 and § VI(A): "objects (O)"); generating accompaniment information, which is music data indicating an accompaniment (Hutchings (VI)(B): "In the AMS, multiple software agents are used to generate musical arrangements… Agents were designed to have either Harmony, Melody or Percussive Rhythm roles for developing polyphonic compositions with a mixture of percussive and pitched instruments."), based on a change in the first stream data (Hutchings fig. 3 teaches "Environments" → "Establish Style" → "Harmony Agent"); generating melody information, which is music data indicating a melody (Hutchings (VI)(B): "In the AMS, multiple software agents are used to generate musical arrangements… Agents were designed to have either Harmony, Melody or Percussive Rhythm roles for developing polyphonic compositions with a mixture of percussive and pitched instruments."), based on a change in the second stream data (Hutchings fig. 3 teaches "Objects" → "Select Melodies" → "XCS Melody Manipulation" → "Melody Agent A"); and outputting the generated musical piece information (Hutchings § VI: "the AMS was developed as a stand-alone package that receives messages from video games in real-time to generate a model of the game-state and output music."). Hutchings does not explicitly disclose adjusting the melody information in accordance with a key of the accompaniment indicated by the generated accompaniment information; and combining the accompaniment information and the adjusted melody information to generate musical piece information. 
However, Hoeberechts teaches adjusting the melody information in accordance with a key of the accompaniment indicated by the generated accompaniment information (Hoeberechts ¶0153: "In any event, the line producer 220c maps the motif pattern onto a previously generated harmonic pattern by: converting the motif pattern to a harmony adjusted motif based on the previously generated harmonic pattern; bringing each note in the motif pattern into a range of a previously generated mode; and resolving each note in the motif pattern into at least one of a pitch of the previously generated mode and a nearby dissonant note, based on the harmonic chords in the previously generated harmonic pattern."); and combining the accompaniment information and the adjusted melody information to generate musical piece information (Hoeberechts ¶0003: "A first aspect of the specification provides a flexible music composition engine, comprising a processing unit. The processing unit is enabled to create a pipeline for coordinating generation of a musical piece. The processing unit is further enabled to load at least one producer into the pipeline, the at least one producer for producing at least one high level musical element of the musical piece, independent of other producers in the pipeline. The processing unit is further enabled to call at least one generator, via the at least one producer, the at least one generator for generating at least one low level musical element of the musical piece. The processing unit is further enabled to integrate the at least one low level musical element and the at least one high level musical element, such that the processing unit produces the musical piece in real time."). It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the non-transitory computer-readable recording medium of Hutchings by adding the melody adjustment and combining units of Hoeberechts to generate music with a melody that is suitable to the accompaniment (Hutchings § VI(B)(2)). Claim 5 is rejected under 35 U.S.C. 103 as unpatentable over Hutchings in view of Hoeberechts, and further in view of Venkatasubramanian et al. (WO 2005010865 A2, February 3, 2005), hereinafter Venkatasubramanian. Regarding claim 5, Hutchings (in view of Hoeberechts) teaches a music generation device comprising the features of claim 1 as discussed above. However, Venkatasubramanian suggests that the melody generation unit changes a slope in raising or lowering of the melody based on an amount of change in the second stream data in a case where the amount of change is larger than a predetermined threshold (Venkatasubramanian p. 3, lines 10-12: "In this research, a threshold on the change of slope of the pitch curve marks notes, which are quantized to the nearest scale note."), and the melody generation unit does not change the slope in the raising or lowering of the melody in a case where the amount of change is smaller than the predetermined threshold (Venkatasubramanian p. 10, lines 18-24: "An abrupt jump occurs at the Ifa frame if successive note values differ by more than a threshold… If there is no abrupt jump at k, then we have a continuity at k."). 
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the music generation device of Hutchings (as modified by Hoeberechts) by adding the thresholds of Venkatasubramanian to provide automatic information retrieval using melody information and continuity information (Venkatasubramanian p. 6, lines 2-3). Claim 7 is rejected under 35 U.S.C. 103 as unpatentable over Hutchings in view of Hoeberechts, and further in view of Vorobyev (US 20200302902 A1, September 24, 2020), hereinafter Vorobyev. Regarding claim 7, Hutchings (in view of Hoeberechts) teaches a music generation device comprising the features of claim 6 as discussed above. Hutchings (in view of Hoeberechts) does not explicitly disclose that at least one of the plurality of notes constituting one beat of the melody is a rest. However, Vorobyev teaches that at least one of the plurality of notes constituting one beat of the melody is a rest (Vorobyev ¶0059: "A note density delta constraint 320 g determines how the note density changes over time. Note density delta constraint 320 g may be applied through a curve/envelope control indicating which parts of the melody should have more notes and which parts have fewer notes. A maximum value on the curve will result in a densely-populated clump of notes clustered around the control point and a minimum value of 0 is equivalent to creating a “rest” in the notes with no notes generated in that space."). It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the music generation device of Hutchings (as modified by Hoeberechts) by adding the rests of Vorobyev to permit melodies to include rests between melody notes (Vorobyev ¶0032). Claim 8 is rejected under 35 U.S.C. 103 as unpatentable over Hutchings in view of Hoeberechts, and further in view of Brown et al. (US 9968305 B1, May 15, 2018), hereinafter Brown. Regarding claim 8, Hutchings (in view of Hoeberechts) teaches a music generation device comprising the features of claim 1 as discussed above. Hutchings (in view of Hoeberechts) does not explicitly disclose that the melody generation unit generates the melody in such a way that one beat is constituted by a sound including a chord in a case where an amount of change in the second stream data is larger than a predetermined threshold, and the melody generation unit generates the melody in such a way that one beat is constituted by a single sound in a case where the amount of change in the second stream data is smaller than the predetermined threshold. However, Brown suggests that the melody generation unit generates the melody in such a way that one beat is constituted by a sound including a chord (Brown col. 2, lines 16-19: "The computer runs software 300 to convert the raw data streams to sequences of musical notes 321, 322, and 323. In this specification, the term “musical notes” refers to individual notes or combinations of notes such as chords.") in a case where an amount of change in the second stream data is larger than a predetermined threshold (Brown col. 3, lines 6-8: "Sensitivity 375—This control is a threshold value that determines when the voice will sound in response to the EDA input signal."), and the melody generation unit generates the melody in such a way that one beat is constituted by a single sound (Brown col.
2, lines 16-19: "The computer runs software 300 to convert the raw data streams to sequences of musical notes 321, 322, and 323. In this specification, the term “musical notes” refers to individual notes or combinations of notes such as chords.") in a case where the amount of change in the second stream data is smaller than the predetermined threshold (Brown col. 3, lines 6-8: "Sensitivity 375—This control is a threshold value that determines when the voice will sound in response to the EDA input signal."). It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the music generation device of Hutchings (as modified by Hoeberechts) by adding the melody and chords of Brown to allow variations in signals to be interpreted as music (Brown abstract). Claim 9 is rejected under 35 U.S.C. 103 as unpatentable over Hutchings in view of Hoeberechts, and further in view of Balassanian (US 20140052282 A1, February 20, 2014), hereinafter Balassanian. Regarding claim 9, Hutchings (in view of Hoeberechts) teaches a music generation device comprising the features of claim 4 as discussed above. Hutchings (in view of Hoeberechts) does not explicitly disclose that the melody generation unit sets a volume of the melody to a first volume in a case where an amount of change in the second stream data is larger than a predetermined threshold, and the melody generation unit sets the volume of the melody to a second volume lower than the first volume in a case where the amount of change in the second stream data is smaller than the predetermined threshold. However, Balassanian suggests that the melody generation unit sets a volume of the melody (Balassanian ¶0021: "Music generator 130, in the illustrated embodiment, includes user interface 135, mood controller 140, music constructor 145, intake analyzer 150, and instruments 155. In various embodiments, music generator may include additional elements in place of and/or in addition to those shown. In some embodiments, music generator 130 is configured to determine musical attributes based on external data 180. In these embodiments, music generator 130 is configured to generate music content based on the musical attributes. Exemplary musical attributes generated by music generator 130 may include tempo (e.g., specified in beats per minute), key (e.g., B Flat Major), complexity, energy, variety, volume, spectrum, envelope, modulation, periodicity, rise and decay time, noise, etc., in various embodiments.") to a first volume in a case where an amount of change in the second stream data is larger than a predetermined threshold, and the melody generation unit sets the volume of the melody to a second volume lower than the first volume in a case where the amount of change in the second stream data is smaller than the predetermined threshold (Balassanian ¶0055: "In one embodiment, mood controller 140 is configured to receive information about a listener environment. Examples of this information include lighting information and sound information. For example, based on conversation volume in a room, mood controller 140 may be configured to adjust musical attributes to avoid drowning out conversation or to increase generated music volume at a concert with a boisterous crowd. In one embodiment, sound information may be measured by microphones in user devices communicating with music generator 130."). 
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the music generation device of Hutchings (as modified by Hoeberechts) by adding the volume levels of Balassanian to increase generated music volume in a boisterous environment (Balassanian ¶0055).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHILIP SCOLES whose telephone number is (703)756-1831. The examiner can normally be reached Monday-Friday 8:30-4:30 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dedei Hammond can be reached on 571-270-7938. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHILIP G SCOLES/
Examiner, Art Unit 2837

/JEFFREY DONELS/
Primary Examiner, Art Unit 2837
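
Reader's note: the §103 rejections above lean heavily on the spreading-activation game-state model quoted from Hutchings. For orientation only, the following is a minimal Python sketch of that style of model, built from the details the Office Action quotes (a weighted, undirected NetworkX graph; affect, object, and environment vertices; activation clamped to 0-100; no edges between affect vertices). The specific concepts, edge weights, and spreading rule used here are illustrative assumptions, not the paper's actual code.

# Minimal sketch of a spreading-activation game-state model of the kind
# described in the quoted Hutchings passages. Nodes, weights, and the
# spreading rule are illustrative assumptions.
import networkx as nx

AFFECTS = ["sadness", "happiness", "threat", "anger", "tenderness", "excitement"]

def build_graph() -> nx.Graph:
    g = nx.Graph()
    # Affect vertices; edges are never formed between affect vertices.
    for a in AFFECTS:
        g.add_node(a, kind="affect", activation=0.0)
    # Example object/environment concepts with assumed association strengths.
    g.add_node("Grandma", kind="object", activation=0.0)
    g.add_node("Bakery", kind="environment", activation=0.0)
    g.add_edge("Grandma", "tenderness", weight=0.8)
    g.add_edge("Bakery", "happiness", weight=0.5)
    return g

def activate(g, node, amount):
    """Raise a concept's activation (clamped to 0-100), e.g. when it appears on screen."""
    g.nodes[node]["activation"] = min(100.0, g.nodes[node]["activation"] + amount)

def spread(g, rate=0.1):
    """One naive spreading step: each neighbour gains activation in proportion
    to the connecting edge weight. The real update rule is not reproduced here."""
    snapshot = {n: d["activation"] for n, d in g.nodes(data=True)}
    for u, v, d in g.edges(data=True):
        g.nodes[u]["activation"] = min(100.0, snapshot[u] + rate * d["weight"] * snapshot[v])
        g.nodes[v]["activation"] = min(100.0, snapshot[v] + rate * d["weight"] * snapshot[u])

g = build_graph()
activate(g, "Grandma", 100.0)   # single-instance object put on screen (cf. the quoted Zelda example)
spread(g)                       # in the paper, updates arrive on a 30 ms OSC message cycle
print({n: round(d["activation"], 1) for n, d in g.nodes(data=True)})

In the quoted passages, melody agents then read the highest-activated object and the binned affect activations (0-24, 25-49, 50-74, 75-100) to choose melody operations; none of that agent logic is attempted in this sketch.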

Prosecution Timeline

Aug 12, 2022: Application Filed
Nov 23, 2025: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603073: ELECTRONIC PERCUSSION INSTRUMENT, CONTROL DEVICE FOR ELECTRONIC PERCUSSION INSTRUMENT, AND CONTROL METHOD THEREFOR (2y 5m to grant; granted Apr 14, 2026)
Patent 12597405: AUTO-RECORDING FOR MUSICAL INSTRUMENT (2y 5m to grant; granted Apr 07, 2026)
Patent 12597406: ELECTRONIC CYMBAL AND STRIKING DETECTION METHOD (2y 5m to grant; granted Apr 07, 2026)
Patent 12586552: MULTI-LEVEL AUDIO SEGMENTATION USING DEEP EMBEDDINGS (2y 5m to grant; granted Mar 24, 2026)
Patent 12579962: DEVICE AND ELECTRONIC MUSICAL INSTRUMENT (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 56%
With Interview: 77% (+21.3%)
Median Time to Grant: 3y 10m
PTA Risk: Low
Based on 54 resolved cases by this examiner. Grant probability derived from career allow rate.
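
As a cross-check, the 77% figure is consistent with simply adding the examiner's +21.3% interview lift to the ~56% base rate and capping at 100%. That additive model is an assumption implied by the displayed numbers, not something the page states; a minimal sketch:

# Assumed additive model: with-interview probability = base rate + interview lift.
def with_interview(base_pct: float, lift_pct: float) -> float:
    return min(100.0, base_pct + lift_pct)

base = 100.0 * 30 / 54                                       # 55.6%, displayed as 56%
print(f"Base grant probability: {base:.1f}%")
print(f"With interview: {with_interview(base, 21.3):.1f}%")  # ~76.9%, displayed as 77%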
