DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Request for Continued Examination (RCE)

The RCE filed January 21, 2026 is hereby acknowledged.

Response to Arguments

Applicant’s arguments filed January 21, 2026 with respect to claims 1-2, 6-7, and 14-15 rejected under 35 U.S.C. 103 have been considered but are moot because the new ground of rejection does not rely on the matters specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 6, 7, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Iwamoto et al., U.S. Patent Application Publication No. 2002/0033090 (hereinafter Iwamoto), in view of Huo et al., U.S. Patent Application Publication No. 2020/0066240 (hereinafter Huo).

Regarding claim 1, Iwamoto discloses an information processing apparatus comprising: circuitry configured to acquire music information (paragraph [0072], ‘FIGS. 8a and 8b show, in combination, a flowchart of the process of extracting music piece data at the side of the terminal unit in an embodiment of the music composition assisting apparatus according to the present invention, in which the terminal unit extracts music piece template data from a source music piece data file. A step S111 exhibits a list of the names of the music piece data files stored in the terminal unit by downloading from a server or installing from a CD-ROM so that the user can select a desired one. A step S112 judges whether the user has selected a music piece from among the list. When not yet selected, the process of the step S111 is repeated, and when selected, the process flow proceeds to a step S113.’); extract a plurality of types of feature amounts of features from the music information previously acquired (paragraph [0073], ‘The step S113 extracts the elemental features of the selected music piece data file to create a music piece template data file.’); and generate and store in a database association information (paragraph [0073], ‘Then, a step S114 (in FIG. 8b) put an identifier name such as "Reminding of so and so" to the extracted music piece template data file’) in which the plurality of types of feature amounts extracted with predetermined identification information as music feature information (paragraph [0073], ‘From the chord part of the music piece data file, are extracted the chord progression data of the entire piece of music. From the melody part of the music piece data file, is extracted (automatically according to the program) a climactic portion of several measures, i.e. the bridge portion of the music piece, as a motif of the music piece. From this motif are extracted data concerning the time points and pitches of the notes constituting the motif, i.e. the respective time points and the general pitch curve, or a melody skeleton data string (consisting of time points and pitches of skeleton notes, i.e. notes of primary importance). Based on the aforementioned variation in the chord progression and/or the pitch comparison between the motif portion and other portions, the aforementioned pitch similar/contrastive data are extracted. From the accompaniment part (i.e. rhythm part), are extracted accompaniment style data (rhythm style data) and section progress data. Then, a step S114 (in FIG. 8b) put an identifier name such as "Reminding of so and so" to the extracted music piece template data file using a part of the title of the source music piece corresponding thereto. A step S115 stores the extracted music piece template data file and its identifier name in the memory, before returning to the main routine.’).

Iwamoto further discloses storing association information that uses a title or other feature (paragraph [0037]). It is noted that rhythm style is recognized as a musical feature (paragraph [0073]). Further, Iwamoto discloses that association information is intended to be easily recognizable to the user (paragraph [0065]). To the extent that Iwamoto fails to explicitly recite association information representing a musical style, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to assign association information (i.e., an identifier name) corresponding to a musical feature or style in order to make the corresponding stored data easily recognizable to the user.

Iwamoto fails to explicitly disclose the music feature information being arranged to have a data configuration structure that is recognizable as learning data by a machine learning process that performs composition processing. Huo discloses a system where composition processing is performed by configuring data to be recognized by a machine learning process (paragraph [0032]). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to configure the music feature information in a manner that would be recognized by a machine learning process, since using machine learning provides known benefits of reliably and quickly analyzing large amounts of data, and doing so would yield predictable results.
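Purely for illustration of the acquire/extract/store mapping discussed above, and not as a characterization of Iwamoto, Huo, or the claims, the operation can be sketched as follows; every class, function, and field name in the sketch is hypothetical.

    # Illustrative only: extract several types of feature amounts from acquired
    # music information and store the result as association information keyed by
    # predetermined identification information, in a flat record that a machine
    # learning process could ingest as learning data.
    from dataclasses import dataclass, asdict
    from typing import Dict, List, Tuple

    @dataclass
    class MusicFeatureInfo:
        identifier: str                            # predetermined identification information
        chord_progression: List[str]               # e.g. ["C", "Am", "F", "G"]
        melody_skeleton: List[Tuple[float, int]]   # (time point, pitch) pairs for the motif
        rhythm_style: str                          # accompaniment (rhythm) style data

    def extract_features(music_piece: Dict) -> MusicFeatureInfo:
        """Extract a plurality of types of feature amounts from music information."""
        return MusicFeatureInfo(
            identifier=f"Reminding of {music_piece['title']}",
            chord_progression=music_piece.get("chords", []),
            melody_skeleton=[(n["time"], n["pitch"]) for n in music_piece.get("motif_notes", [])],
            rhythm_style=music_piece.get("rhythm_style", "unknown"),
        )

    database: Dict[str, dict] = {}   # stands in for the association-information database

    def store_association_information(info: MusicFeatureInfo) -> None:
        # asdict() flattens the record into plain keys and values, a uniform
        # configuration a learning pipeline could consume directly.
        database[info.identifier] = asdict(info)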
Regarding claim 2, Iwamoto discloses the information processing apparatus according to claim 1, wherein the acquisition unit acquires music information by receiving music information created by a producer using a music creation-related application installed in a terminal apparatus from the terminal apparatus (paragraph [0036], ‘Portable telephone (cellular phone) terminal units 2 and 3 are connected wirelessly with each other via a base station 4 (or base stations 4), which in turn is connected to the communication network 1, wherein the portable telephone terminal unit 2 is equipped with a music composition assisting apparatus. A server 5 is connected to the communication network 1 and functions to supply or deliver music piece template data and music piece data. Personal computers 6 and 7 are also connected to the communication network 1, wherein the personal computer 6 is equipped with a music composition assisting apparatus. The communication network 1 may be a LAN (local area network) connecting the server 5 and the personal computers 6 and 7 therein.’), the extraction unit extracts the plurality of types of feature amounts included in the music information, and the generation unit associates identification information of the producer with the music feature information (paragraph [0043], ‘A music piece template data file and its source music piece data file may be served in a set for downloading purpose. The template data and the source music piece data may be in different file formats, or may be formed in a single integrated data file. The music piece template data file contains information on the title of the source music piece, the names of the composer, the poet or the singer, or else. Such information will help the user in selecting a desired one from among a plurality of template files stored in the server 5. The template files maybe prepared selectable by designating the names of the source music.’ and paragraph [0073], ‘From the chord part of the music piece data file, are extracted the chord progression data of the entire piece of music. From the melody part of the music piece data file, is extracted (automatically according to the program) a climactic portion of several measures, i.e. the bridge portion of the music piece, as a motif of the music piece. From this motif are extracted data concerning the time points and pitches of the notes constituting the motif, i.e. the respective time points and the general pitch curve, or a melody skeleton data string (consisting of time points and pitches of skeleton notes, i.e. notes of primary importance). Based on the aforementioned variation in the chord progression and/or the pitch comparison between the motif portion and other portions, the aforementioned pitch similar/contrastive data are extracted. From the accompaniment part (i.e. rhythm part), are extracted accompaniment style data (rhythm style data) and section progress data.’).
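As an illustrative sketch only (the JSON transport, function names, and fields below are hypothetical and are not taken from Iwamoto), receiving music information from a producer's terminal apparatus and associating the producer's identification information with already-extracted music feature information might look like this:

    # Illustrative only: decode a submission sent from a producer's terminal
    # apparatus and register the feature information together with the
    # producer's identification information.
    import json
    from typing import Dict

    def receive_music_from_terminal(payload: bytes) -> Dict:
        """Parse a submission created with a music creation-related application."""
        message = json.loads(payload)
        return {"producer_id": message["producer_id"], "music_information": message["music"]}

    def associate_producer(producer_id: str, music_feature_info: Dict, database: Dict[str, Dict]) -> None:
        """Associate the producer's identification information with the music feature information."""
        key = music_feature_info["identifier"]
        database[key] = {"producer_id": producer_id, **music_feature_info}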
Regarding claim 6, Iwamoto discloses the information processing apparatus according to claim 1, further comprising a transmission unit that transmits presentation information of the music feature information according to instruction information received from a terminal apparatus in which a music creation-related application is installed (Figure 10); and a composition unit that, upon receiving selection of the music feature information from the terminal apparatus, composes music information using machine learning on a basis of the selected music feature information and transmits the composed music information to the terminal apparatus (Figures 9a, 9b, and 9c).

Regarding claim 7, Iwamoto discloses the information processing apparatus according to claim 6, further comprising an update unit that, when receiving performance information based on the music information transmitted by the composition unit from the terminal apparatus, adds the performance information to the selected music feature information and updates (i.e., edits) the selected music feature information (paragraph [0042], ‘By applying a simple edition to the created music piece, the user can further add his/her tastes to the created music piece. If some points or features are missing in the analyzed template data, the user can easily compensate such points and features through an editing operation. Among the above extracted data, the chord progression data, the data concerning pitches and time points of the notes at the motif section, and the accompaniment style data. If the newly created music piece data may not include a chord part, the extraction of the chord progression data may not be necessary. Further, if the newly created music piece may not contain an accompaniment part, the extraction of the accompaniment styles will not be necessary.’).
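The present/compose/update interaction addressed in claims 6 and 7 can be sketched, again purely hypothetically, as follows; the "compose" step below is a trivial placeholder for whatever machine learning model would actually perform the composition processing, and all names are illustrative:

    # Illustrative only: present stored music feature information according to
    # instruction information, compose from a selected entry, and update that
    # entry with received performance information.
    from typing import Dict, List

    def present_feature_information(database: Dict[str, Dict], instruction: Dict) -> List[str]:
        """Return identifiers matching instruction information received from the terminal apparatus."""
        style = instruction.get("style")
        return [k for k, v in database.items() if style is None or v.get("rhythm_style") == style]

    def compose_from_selection(database: Dict[str, Dict], selected_id: str) -> Dict:
        """Compose music information on the basis of the selected music feature information."""
        features = database[selected_id]
        # A real system would feed the features to a trained model; echoing the
        # chord progression keeps the sketch runnable.
        return {"based_on": selected_id, "chords": features.get("chord_progression", [])}

    def update_with_performance(database: Dict[str, Dict], selected_id: str, performance: Dict) -> None:
        """Add received performance information to the selected music feature information."""
        database[selected_id].setdefault("performances", []).append(performance)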
Regarding claim 14, method claim 14 is drawn to the method of using the corresponding apparatus claimed in claim 1. Therefore, method claim 14 corresponds to apparatus claim 1 and is rejected for the same reasons of obviousness as set forth above.

Regarding claim 15, claim 15 has limitations similar to those treated in the above rejection(s), and is met by the references as discussed above.

Claims 3, 4, 8, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Iwamoto as modified by Huo, and further in view of Kozielski et al., U.S. Patent Application Publication No. 2017/0092245 (hereinafter Kozielski).

Regarding claims 3 and 4, Iwamoto as modified by Huo discloses the information processing apparatus according to claim 2, wherein the music information created by the producer includes chord progression information indicating a chord progression and melody information indicating a melody (Iwamoto; paragraph [0037]), but fails to explicitly disclose a bass signal indicating a bass progression in a bar having a prescribed length. Kozielski discloses an apparatus for performing musical analysis and for composing music wherein music information created includes a bass signal indicating a bass progression in a bar having a prescribed length and wherein the music information created by the producer includes drum progression information indicating a drum progression in a bar having a prescribed length (paragraphs [0045] and [0054]). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate music information including a bass signal indicating a bass progression in a bar having a prescribed length, wherein the music information created by the producer includes drum progression information indicating a drum progression in a bar having a prescribed length, into the apparatus of Iwamoto as modified by Huo, since doing so would provide increased music creation functionality by incorporating a drum beat into the music created.

Regarding claim 8, Iwamoto as modified by Huo discloses the information processing apparatus according to claim 1, wherein the extraction unit extracts from the music information chord progression information indicating a chord progression and melody information indicating a melody (paragraphs [0045] and [0054]), and the generation unit generates score information (i.e., template data) including chord progression information indicating a chord progression and melody information indicating a melody and sets the score information as a component of the music feature information (paragraph [0037], ‘The music piece template data file is given an identifier name "Reminding of so and so" using the title (or other feature) of the source music piece. The contents of the music piece template data file include chord progression data for an entire piece of music or at least for several measures, data concerning pitches and time points of notes for several measures which constitute a melody motif, pitch similar/contrastive data (pitch resemblance data), accompaniment style data of the music piece, and so forth, or at least one of these.’), but fails to explicitly disclose a bass signal indicating a bass progression in a bar having a prescribed length as feature amounts, and score information including bass information indicating a bass sound progression in the bar having the prescribed length. Kozielski discloses an apparatus for performing musical analysis and for composing music wherein music information created includes a bass signal indicating a bass progression in a bar having a prescribed length and wherein the music information created by the producer includes drum progression information indicating a drum progression in a bar having a prescribed length (paragraphs [0045] and [0054]). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate music information including a bass signal indicating a bass progression in a bar having a prescribed length, wherein the music information created by the producer includes drum progression information indicating a drum progression in a bar having a prescribed length, into the apparatus of Iwamoto as modified by Huo, since doing so would provide increased music creation functionality by incorporating a drum beat into the music created.
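A hypothetical per-bar data structure of the kind discussed for claims 3, 4, and 8 (bars of a prescribed length carrying chord, melody, bass, and drum progressions as components of score information) might be sketched as follows; all names and values are illustrative and are not drawn from the cited references:

    # Illustrative only: score information assembled from bars of a prescribed
    # length, each carrying chord, melody, bass, and (optionally) drum progressions.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Bar:
        length_beats: int                   # the prescribed bar length, e.g. 4 beats
        chord: str                          # chord progression entry for this bar
        melody: List[int]                   # melody notes (MIDI numbers) in this bar
        bass: List[int]                     # bass progression within this bar
        drums: Optional[List[str]] = None   # drum progression, when also extracted

    @dataclass
    class ScoreInformation:
        bars: List[Bar] = field(default_factory=list)

        def add_drums(self, bar_index: int, pattern: List[str]) -> None:
            """Add drum information to the score information for one bar."""
            self.bars[bar_index].drums = pattern

    score = ScoreInformation(bars=[Bar(4, "C", [60, 62, 64, 65], [36, 43])])
    score.add_drums(0, ["kick", "snare", "kick", "snare"])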
Regarding claim 9, Iwamoto as modified by Huo and Kozielski discloses the information processing apparatus according to claim 8, wherein the extraction unit extracts from the music information drum information indicating a drum sound progression in the bar having the prescribed length as a feature amount (Kozielski; paragraphs [0045] and [0054]), and the generation unit further adds the drum information to the score information. To the extent that Kozielski does not explicitly disclose adding drum information to score information (template data), it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate drum information into the score data, since doing so would provide increased music creation functionality by incorporating a drum beat into the music created in the apparatus of Iwamoto as modified by Huo.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Iwamoto as modified by Huo in view of Inokuchi et al., U.S. Patent Application Publication No. 2005/0254390 (hereinafter Inokuchi).

Regarding claim 5, Iwamoto as modified by Huo discloses the information processing apparatus according to claim 1, including extracting a plurality of types of feature amounts from music information, and the generation unit associating identification information with the music feature information (see claim 1), but fails to explicitly disclose wherein the acquisition unit acquires copyrighted music information that is periodically registered at a preset timing, the extraction unit extracts the plurality of types of feature amounts from the copyrighted music information, and the generation unit associates identification information of the copyrighted music information with the music feature information.
Inokuchi discloses a copyrighted music management and crediting system wherein an acquisition unit acquires copyrighted music information that is periodically registered at a preset timing (paragraph [0042], ‘Reference numeral 104 denotes a user device having a reproducing function of the distributed music contents. The user device 104 has a function for reproducing the contents data including the distributed music contents and executing a reproduction charging process. That is, the user device 104 decodes the encryption of the distributed contents data and decodes the compression encoding, so that it can reproduce the music contents. The decoding of the contents data including the music contents is charged for. A contents distribution provider exists between the contents server 102 and user device 104 as necessary and distributes the contents data in the contents server 102 to the user. There are several means as distributing means which is used by the distribution provider. One of the means is a store 105. For example, a media in which the contents data has been recorded is distributed as a supplement of a magazine. A wire network 106 such as Internet or a CATV (cable television) is used as distributing means of the contents data. Further, a cellular phone network 107 and a satellite network 108 such as satellite broadcast, satellite communication, or the like are used as distributing means of the contents data.’). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to acquire copyrighted music information periodically registered at a preset timing, extract the plurality of types of feature amounts from the copyrighted music information, and have the generation unit associate identification information of the copyrighted music information with the music feature information, since doing so would enable the apparatus of Iwamoto as modified by Huo to create or edit music based on information of popular music, as well as provide the capability of compensating the producer monetarily by charging the music composer a fee for its use.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Iwamoto as modified by Huo and Kozielski, and further in view of Serletic et al., U.S. Patent Application Publication No. 2018/0268792 (hereinafter Serletic).

Regarding claim 10, Iwamoto as modified by Huo and Kozielski discloses an information processing apparatus according to claim 8, but fails to explicitly disclose wherein the generation unit generates music format information in which identification information of the score information and identification information of the lyric information for a same bar are registered in association with each other, and sets the music format information as a component of the music feature information. Serletic discloses an apparatus that generates music format information in which identification information of the score information and identification information of the lyric information for a same bar are registered in association with each other, and sets the music format information as a component of the music feature information (paragraph [0059], ‘In some embodiments, the generated musical work may be received in the form of an audio file including a vocal rendering of the lyrical input entered by the user correlating with the music/melody of the musical input, either selected or created. In some embodiments, the voice synthesizer may generate the entire musical work including the vocal rendering of the lyrical input and the musical portion from the musical input. In other embodiments, the voice synthesizer may generate only a vocal rendering of the input text created based on the synthesizer input, which may be generated by analyzing the lyrical input and the musical input described above. In such embodiments, a musical rendering based on the musical input, or the musical input itself, may be combined with the vocal rendering to generate a musical work.’ and paragraph [0111], ‘The length of the note bars 608 with respect to the horizontal (i.e., time) axis 604 may also indicate for how long that particular lyric or group of lyrics may be played at the specified note. In some embodiments, the length of the note bar 608 may be adjusted by lengthening or shortening the note bar, and the note of the lyrics may be adjusted by moving the note bar with respect to the vertical (i.e., note) axis.’). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Serletic of generating music format information in which identification information of the score information and identification information of the lyric information for a same bar are registered in association with each other, and setting the music format information as a component of the music feature information, into the apparatus of Iwamoto as modified by Huo and Kozielski, since doing so would enable the apparatus of Iwamoto as modified by Huo and Kozielski to generate a more complete musical composition with a lyrical component.
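A hypothetical sketch of the "music format information" limitation of claim 10, in which the identification information of the score information and of the lyric information for the same bar are registered in association with each other, follows; the identifiers and structure are illustrative only and are not taken from Serletic or the claims:

    # Illustrative only: pair, for each bar, the identification information of
    # the score information with that of the lyric information.
    from typing import Dict, List

    def build_music_format_information(score_ids: List[str], lyric_ids: List[str]) -> List[Dict]:
        """Register score and lyric identifiers for the same bar in association with each other."""
        return [
            {"bar": i, "score_id": s, "lyric_id": l}
            for i, (s, l) in enumerate(zip(score_ids, lyric_ids), start=1)
        ]

    music_format_information = build_music_format_information(
        score_ids=["score-001-bar1", "score-001-bar2"],
        lyric_ids=["lyric-A-bar1", "lyric-A-bar2"],
    )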
Allowable Subject Matter

Claims 11-13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daniell L. Negron, whose telephone number is (571) 272-7559. The examiner can normally be reached Mondays through Fridays between the hours of 7:30 AM and 5:00 PM (alternate Fridays off).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Calvin L. Hewitt, can be reached at (571) 272-6709. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Daniell L Negron/
Supervisory Patent Examiner, Art Unit 4100
February 6, 2026