Prosecution Insights
Last updated: April 19, 2026
Application No. 18/863,900

METHOD AND APPARATUS FOR GENERATING SONG LIST, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Final Rejection: §101, §102, §103, §112
Filed: Nov 07, 2024
Examiner: PEREZ-ARROYO, RAQUEL
Art Unit: 2161
Tech Center: 2100 — Computer Architecture & Software
Assignee: Lemon Inc.
OA Round: 2 (Final)
Grant Probability: 58% (Moderate)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 58% (171 granted / 296 resolved; +2.8% vs TC avg)
Interview Lift: +32.3% (strong), comparing resolved cases with and without an interview
Avg Prosecution: 3y 5m (typical timeline); 28 applications currently pending
Total Applications: 324, across all art units

Statute-Specific Performance

§101: 21.9% (-18.1% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§103: 47.6% (+7.6% vs TC avg)
§112: 15.0% (-25.0% vs TC avg)

Based on career data from 296 resolved cases; deltas are relative to the Tech Center average estimate.
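The headline figures in the examiner cards above follow from simple arithmetic on the career counts shown (171 granted out of 296 resolved). A minimal sketch; the function name is mine:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: granted cases as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# Counts from the card above: 171 granted out of 296 resolved.
rate = allow_rate(171, 296)
print(round(rate, 1))  # 57.8, which the dashboard rounds to 58%
```

The "+2.8% vs TC avg" delta would then just be `rate - tc_avg_rate`, implying an estimated Tech Center average of about 55%.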

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action has been issued in response to Applicant’s Communication of amended application S/N 18/863,900 filed on December 4, 2025. Claims 1, 3 to 8, 10, 11, 15 to 18, and 20 to 23 are currently pending with the application.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM FOR GENERATING SONG LIST BASED ON SIMILARITY SCORE.

Claim Objections

Claims 1, 5, 10, and 11 are objected to because of the following informalities: Claim 1 recites the limitation “the first feature extraction model is configured to…” in line 14, which appears to contain a typographical error and should read “wherein the first feature extraction model is configured to…”. The same rationale applies to claims 10 and 11, since they recite similar limitations and contain similar deficiencies. Claim 5 recites the limitation “obtain the similarity score corresponding each of the at least one candidate song” in line 11, which appears to contain a typographical error and should read “obtain the similarity score corresponding to each of the at least one candidate song”. Appropriate corrections are required.

Claim Rejections - 35 USC § 112

Claims 1, 3 to 8, 10, 11, 15 to 18, and 20 to 23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation “the candidate library information” in line 11. There is insufficient antecedent basis for this limitation in the claim. The same rationale applies to claims 10 and 11, since they recite similar limitations, and to claims 3 to 8, 15 to 18, and 20 to 23, since they inherit the same deficiencies by virtue of their dependency.

Claim 8 further recites the limitations “before acquiring the candidate song library information, further comprising: determining a number of candidate songs, according a preset number of songs in the recommended song list; filtering a whole preset song library, according to the number of candidate songs, and determining candidate songs corresponding to the candidate song library information”. These limitations are not clear. More specifically, it is not clear how a number of candidate songs can be determined according to a preset number of songs in the recommended song list before acquiring the candidate song library information when, based on the limitations recited in claim 1, on which claim 8 depends, the recommended song list is generated based on the candidate song library information; it appears that the recommended song list does not exist before acquiring the candidate song library information, and therefore “a preset number of songs in the recommended list” is not known. These deficiencies render claim 8 indefinite. The same rationale applies to claims 19 to 23, since they recite similar limitations.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3 to 8, 10, 11, 15 to 18, and 20 to 23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 13, and 20 recite determining a similarity score, determining a target song, generate a recommended song list, and candidate song list information.

The limitation of determining a similarity score, which specifically recites “determining a similarity score of at least one candidate song, according to the candidate song library information and a target feature expression, wherein the target feature expression is a feature expression of a seed song, and the similarity score represents similarity between the candidate song and the seed song”, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, but for the recitation of generic computer components. That is, other than reciting “by a processor” (claim 10), nothing in the claim element precludes the steps from practically being performed in a human mind. For example, but for the “by a processor” language, “determining”, in the context of this claim, encompasses the user mentally, with the aid of pen and paper, calculating a score of similarity by comparing attributes of songs with attributes of a seed song.

The limitation of determining a target song, which specifically recites “determining a target song based on the similarity score of the candidate song”, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, but for the recitation of generic computer components. That is, other than reciting “by a processor”, nothing in the claim element precludes the steps from practically being performed in a human mind. For example, but for the “by a processor” language, “determining”, in the context of this claim, encompasses the user mentally, and with the aid of pen and paper, determining a target song based on the similarity score that was determined in the previous step.
The limitation of generate a recommended song list, which specifically recites “generate a recommended song list based on the target song”, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, but for the recitation of generic computer components. That is, other than reciting “by a processor”, nothing in the claim element precludes the steps from practically being performed in a human mind. For example, but for the “by a processor” language, “generating”, in the context of this claim, encompasses the user mentally, and with the aid of pen and paper, writing down a list of songs that includes the previously determined target song.

The limitation of “generate a candidate song library information” is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, but for the recitation of generic computer components. That is, other than reciting “by a processor”, nothing in the claim element precludes the steps from practically being performed in a human mind. For example, but for the “by a processor” language, “generating”, in the context of this claim, encompasses the user mentally, and with the aid of pen and paper, writing down information of songs.

If a claim limitation, under its broadest reasonable interpretation, covers mental processes but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea. This judicial exception is not integrated into a practical application.
In particular, the claims recite the additional elements – “acquiring candidate song library information, wherein the candidate song library information comprises feature expressions of a candidate song, and the feature expressions represent song features in a plurality of dimensions”, “wherein before acquiring the candidate library information, the method further comprises: processing the candidate song in a candidate library by using a first feature extraction model, to generate the candidate song library information, the first feature extraction model is configured to extract at least two first features of the candidate song”, “wherein after acquiring the candidate song library information, the method further comprises: acquiring a seed song in response to a user instruction; processing the seed song by using the first feature extraction model, to obtain the target feature expression, wherein the first feature extraction model is further configured to extract at least two first song features of the seed song”, a processor, and a storage.

The limitations “acquiring candidate song library information, wherein the candidate song library information comprises feature expressions of a candidate song, and the feature expressions represent song features in a plurality of dimensions” and “acquiring a seed song in response to a user instruction” amount to data-gathering steps, which are considered to be insignificant extra-solution activity (See MPEP 2106.05(g)).
Continuing with the analysis of the additional limitations: in the limitations “wherein before acquiring the candidate library information, the method further comprises: processing the candidate song in a candidate library by using a first feature extraction model, to generate the candidate song library information, the first feature extraction model is configured to extract at least two first features of the candidate song” and “wherein after acquiring the candidate song library information, the method further comprises: processing the seed song by using the first feature extraction model, to obtain the target feature expression, wherein the first feature extraction model is further configured to extract at least two first song features of the seed song”, the limitations are recited at a high level of generality, with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, and are equivalent to merely saying “apply it”; therefore, they do not integrate the judicial exception into a practical application, nor do they amount to significantly more.

The processor and storage in these steps are recited at a high level of generality (i.e., as a generic processor performing a generic computer function), such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The insignificant extra-solution activity identified above, which includes the data-gathering steps, is recognized by the courts as well-understood, routine, and conventional activity when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (See MPEP 2106.05(d)(II)(i), receiving or transmitting data over a network, e.g., using the Internet to gather data, buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)). The claims are not patent eligible.

Claim 3 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 3 recites the same abstract idea as claim 1. The claim recites the additional limitations of “acquiring a second feature extraction model corresponding to the seed song, and processing the seed song based on the second feature extraction model to obtain the target feature expression, wherein the second feature extraction model is used to extract at least two second song features”. The acquiring limitations amount to data-gathering steps, which are considered to be insignificant extra-solution activity (See MPEP 2106.05(g)) and are recognized by the courts as well-understood, routine, and conventional activities when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (See MPEP 2106.05(d)(II)(i), receiving or transmitting data over a network, e.g., using the Internet to gather data, buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)).
The processing limitation is recited at a high level of generality, with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, and is equivalent to merely saying “apply it”; therefore, it does not integrate the judicial exception into a practical application, nor does it amount to significantly more. The claim does not amount to significantly more than the abstract idea. The same rationale applies to claim 4, since it recites similar limitations.

Claim 5 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 5 recites the same abstract idea as claim 1. The claim recites the additional limitations of “acquiring first feature scores corresponding to a plurality of target features of each of the at least one candidate song, according to the feature expressions of each of the at least one candidate song; acquiring second feature scores corresponding to a plurality of target features of the seed song, according to the target feature expression; calculating a distance between each first feature score corresponding to each of the at least one candidate song and each second feature score corresponding to the seed song to obtain the similarity score corresponding each of the at least one candidate song”, where the calculating limitation can be performed in the human mind with the aid of pen and paper and is therefore further elaborating on the abstract idea. Further, the acquiring limitations amount to data-gathering steps, which are considered to be insignificant extra-solution activity (See MPEP 2106.05(g)) and are recognized by the courts as well-understood, routine, and conventional activities when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (See MPEP 2106.05(d)(II)(i), receiving or transmitting data over a network, e.g., using the Internet to gather data, buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)). The claim does not amount to significantly more. The same rationale applies to claim 6, since it recites similar limitations.

Claim 7 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 7 recites the same abstract idea as claim 1. The claim recites the additional limitation of “filtering the target songs based on a third song feature to obtain first optimized songs; the generating a recommended song list based on the target songs comprises: generating a recommended song list based on the first optimized songs; wherein the third song feature comprises at least one of the following: a song language, a song style, a song release year, a song repetition, and a singer repetition”, which can be performed in the human mind and is therefore elaborating on the abstract idea. Therefore, the claim does not amount to significantly more than the abstract idea. The same rationale applies to claim 8, since it also recites limitations that can be performed in the human mind and is therefore further elaborating on the abstract idea. Additionally, the claims do not include a requirement of anything other than conventional, generic computer technology for executing the abstract idea, and therefore do not amount to significantly more than the abstract idea. The same rationale applies to claims 15 to 18 and 20 to 23, since they recite similar limitations.

Claims 1, 3 to 8, 10, 11, 15 to 18, and 20 to 23 are therefore not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3 to 6, 8, 10, 11, and 20 to 23 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Lu et al. (U.S. Publication No. 2009/0217804), hereinafter Lu.

As to claim 1: Lu discloses:

A method for generating a song list, comprising: acquiring candidate song library information, wherein the candidate song library information comprises feature expressions of a candidate song, and the feature expressions represent song features in a plurality of dimensions [Paragraph 0004 teaches comparing attributes of other songs (hence, acquiring candidate song library information) against attributes of a seed song; Paragraph 0005 teaches detecting attributes of the songs by extracting numeric features of the songs; Paragraph 0023 teaches extracting song features, which are features in multiple dimensions, and describe music properties];

determining a similarity score of at least one candidate song, according to the candidate song library information and a target feature expression, wherein the target feature expression is a feature expression of a seed song, and the similarity score represents similarity between the candidate song and the seed song [Paragraph 0032 teaches measuring the similarity between songs based on the obtained tags, where each song is represented by a profile comprising a 50-dimensional feature vector indicating the presence or absence of each tag; Paragraph 0035 teaches comparing the tags (features or song’s attributes) of the seed song with the tags of the other songs to determine the similarity between them];

determining a target song based on the similarity score of the candidate song, and generate a recommended song list based on the target song [Paragraph 0035 teaches determining whether the seed song and the other song (candidate song) are sufficiently similar with respect to a threshold value; Paragraph 0036 teaches similar songs are added to the playlist, to generate a recommended song playlist];

wherein before acquiring the candidate library information, the method further comprises: processing the candidate song in a candidate library by using a first feature extraction model [Paragraph 0018 teaches obtaining and storing songs, e.g., in a local song data store, from a music service, or multiple services, therefore, a candidate library; Paragraph 0021 teaches attribute detection logic processes the songs to obtain musical attributes], to generate the candidate song library information, the first feature extraction model is configured to extract at least two first features of the candidate song [Paragraph 0021 teaches detecting musical attributes for each song, which becomes the basis for music similarity measurement, where the attribute detection logic coupled to the song data store processes the songs and generates associated tags, therefore, generating the candidate song library information by extracting at least two features of the candidate songs; Paragraph 0023 teaches tags are generated in various stages, including music feature analysis which extracts various features (e.g., 102-dimensional features) describing music properties, reducing feature dimensions; Paragraph 0024 teaches a next stage includes learning, which models each tag as a Gaussian Mixture Model (or GMM, a common technique in pattern recognition) using a set of training data and the extracted features; Paragraph 0025 teaches a third stage referred to as automatic annotation, which leverages the GMM to select the most probable tag from each category, therefore, processing the candidate songs using a feature extraction model],

wherein after acquiring the candidate song library information, the method further comprises: acquiring a seed song in response to a user instruction (Examiner Note: Examiner respectfully points out that this limitation recites the contingent limitation “acquiring a seed song in response to a user instruction”. The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the conditions precedent are not met (See MPEP 2111.04(II)). Therefore, since claim 1 is a method claim, the contingent limitations recited above are not required in claim 1, because the condition precedent is not met. That is, a limitation reading “receiving a user instruction” is not positively recited in the claim. Nonetheless, and in the interest of compact prosecution, all the limitations have been considered as if they were positively recited. Appropriate corrections are required.)
[Paragraph 0030 teaches receiving a seed song, by a user selecting a song];

processing the seed song by using the first feature extraction model, to obtain the target feature expression, wherein the first feature extraction model is further configured to extract at least two first song features of the seed song [Paragraph 0005 teaches extracting numeric features of the song; Paragraph 0006 teaches attribute detection logic that generates (extracts) attributes of songs, where upon receiving a song, automatically generated attributes associated with that song may be used to build a playlist; Paragraph 0020 teaches analyzing categories that describe musical attributes (properties), which are then processed into a number of tags (e.g., fifty) associated with a song, and where the categories include genre, instruments, tempo, rhythm, energy, etc.].

As to claim 3: Lu discloses:

acquiring a second feature extraction model corresponding to the seed song, and processing the seed song based on the second feature extraction model to obtain the target feature expression, wherein the second feature extraction model is used to extract at least two second song features [Paragraph 0005 teaches extracting numeric features of the song; Paragraph 0006 teaches attribute detection logic that generates (extracts) attributes of songs; Paragraph 0020 teaches analyzing categories that describe musical attributes (properties), which are then processed into a number of tags (e.g., fifty) associated with a song, and where the categories include genre, instruments, tempo, rhythm, energy, etc.; Paragraph 0023 teaches music feature analysis, where various features are extracted (102-dimensional features) that describe low-level music properties, such as timbre and rhythm; Paragraph 0029 teaches, upon receiving a seed song, using the tags (attributes) associated with the seed song to determine similarity with other songs].
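The passages of Lu cited above (paragraphs 0033 to 0036) describe a threshold-and-rank selection: candidates sufficiently similar to the seed are kept, ranked, and truncated to a configurable list size. The pattern can be sketched as follows; the function name, the default parameters, and the example scores are mine, not Lu's:

```python
def build_playlist(similarity, threshold=0.7, max_size=25):
    """similarity maps each candidate song -> its similarity score vs the seed.
    Keep songs meeting the threshold, rank by score, enforce the max list size."""
    keep = [(song, score) for song, score in similarity.items() if score >= threshold]
    keep.sort(key=lambda pair: pair[1], reverse=True)
    return [song for song, _ in keep[:max_size]]

print(build_playlist({"a": 0.9, "b": 0.5, "c": 0.8}, threshold=0.6, max_size=2))  # ['a', 'c']
```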
As to claim 4: Lu discloses:

acquiring a second feature extraction model corresponding to the seed song [Paragraph 0029 teaches, upon receiving a seed song, using the tags (attributes) associated with the seed song to determine similarity with other songs];

processing the candidate song based on the second feature extraction model to obtain the candidate song library information [Paragraph 0005 teaches extracting numeric features of the song; Paragraph 0006 teaches attribute detection logic that generates (extracts) attributes of songs; Paragraph 0020 teaches analyzing categories that describe musical attributes (properties), which are then processed into a number of tags (e.g., fifty) associated with a song, and where the categories include genre, instruments, tempo, rhythm, energy, etc.; Paragraph 0023 teaches music feature analysis, where various features are extracted (102-dimensional features) that describe low-level music properties, such as timbre and rhythm; Paragraph 0029 teaches, upon receiving a seed song, using the tags (attributes) associated with the seed song to determine similarity with other songs].
As to claim 5: Lu discloses:

determining a similarity score of at least one candidate song according to the candidate song library information and a target feature expression comprises:

acquiring first feature scores corresponding to a plurality of target features of each of the at least one candidate song, according to the feature expressions of each of the at least one candidate song [Paragraph 0032 teaches each song is represented by a profile comprising a 50-dimensional feature vector indicating the presence or absence of each tag, where the vector represents the feature scores];

acquiring second feature scores corresponding to a plurality of target features of the seed song, according to the target feature expression [Paragraph 0032 teaches each song is represented by a profile comprising a 50-dimensional feature vector indicating the presence or absence of each tag, where the vector represents the feature scores];

calculating a distance between each first feature score corresponding to each of the at least one candidate song and each second feature score corresponding to the seed song to obtain the similarity score corresponding each of the at least one candidate song [Paragraph 0032 teaches similarity between songs is measured based on the above-obtained tags, where similarity between two songs may be measured, e.g., by cosine distance as is known in vector space models].
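Lu's paragraph 0032, as cited above, measures similarity by cosine distance over tag-presence feature vectors (50-dimensional in Lu). A minimal sketch of that calculation; the five-dimensional example vectors are invented for illustration only:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length feature vectors,
    e.g. binary tag-presence profiles of a seed song and a candidate song."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

seed      = [1, 0, 1, 1, 0]  # tag present/absent per dimension
candidate = [1, 1, 1, 0, 0]
print(round(cosine_similarity(seed, candidate), 3))  # 0.667
```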
As to claim 6: Lu discloses:

determining a feature weighting factor corresponding to each target song feature, according to the seed song [Paragraph 0031 teaches computing a similarity weight value based upon an evaluation of the tags of the seed song and the tags of the other song to determine similarity between them];

calculating a weighted distance between each first feature score corresponding to each of the at least one candidate song and each second feature score corresponding to the seed song to obtain the similarity score corresponding to each of the at least one candidate song, based on the feature weighting factor corresponding to each target song feature [Paragraph 0031 teaches computing a similarity weight value based upon an evaluation of the tags of the seed song and the tags of the other song to determine similarity between them; Paragraph 0032 teaches similarity between songs is measured based on the above-obtained tags, where similarity between two songs may be measured, e.g., by cosine distance as is known in vector space models, where the cosine distance may then be used as the weight].

As to claim 8: Lu discloses:

before acquiring the candidate song library information, further comprising: determining a number of candidate songs, according to a preset number of songs in the recommended song list; filtering a whole preset song library, according to the number of candidate songs, and determining candidate songs corresponding to the candidate song library information, wherein a candidate song library is at least part of the whole preset song library [Paragraph 0033 teaches configurable parameters may adjust the list size; Paragraph 0035 teaches a maximum list size may be enforced, e.g., by ranking and selecting the top subset of the songs up to a maximum number, determining whether the song is sufficiently similar with respect to a threshold similarity value; Paragraph 0047 teaches a subset of the songs that are deemed similar].
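Claim 6's weighted variant can be read as scaling each per-feature difference by its weighting factor before aggregating. One common realization, shown here only as an illustrative sketch (the scores and weights are invented, and the claim does not mandate this particular metric), is a weighted Euclidean distance, where a smaller distance means a higher similarity:

```python
import math

def weighted_distance(seed_scores, cand_scores, weights):
    """Weighted Euclidean distance between per-feature scores;
    each squared difference is scaled by its feature weighting factor."""
    return math.sqrt(sum(w * (a - b) ** 2
                         for a, b, w in zip(seed_scores, cand_scores, weights)))

d = weighted_distance([0.9, 0.2], [0.5, 0.2], [2.0, 1.0])
print(round(d, 3))  # 0.566
```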
The same rationale applies to claims 10, 11, and 20 to 23, since they recite similar limitations, and they are therefore similarly rejected.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7 and 15 to 18 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. (U.S. Publication No. 2009/0217804), hereinafter Lu, and further in view of Klamra (U.S. Publication No. 2021/0133235).

As to claim 7: Lu discloses:

after determining the target song based on the similarity scores of the candidate songs, further comprising: filtering the target songs [Paragraph 0037 teaches refining the recommendation list];

wherein the third song feature comprises at least one of the following: a song language, a song style, a song release year, a song repetition, and a singer repetition [Paragraph 0020 teaches categories include genre, instruments, vocal, texture, production, tonality, rhythm, tempo, valence, and energy; Paragraph 0044 teaches users can filter the songs based on tempo, or mood, etc.].

Lu does not appear to expressly disclose filtering the songs based on a third song feature to obtain first optimized songs; the generating a recommended song list based on the target songs comprises: generating a recommended song list based on the first optimized songs.
Klamra discloses:

filtering the songs based on a third song feature to obtain first optimized songs [Paragraph 0007 teaches filtering the first list of media content items to generate a second list of media content items; Paragraph 0023 teaches filtering the first list of media content items based on user data to generate a second list of media content items; Paragraph 0085 teaches user data includes information about the user, and playback history, preferences of the user, e.g., characteristics of media content items that the user is likely to prefer];

the generating a recommended song list based on the target songs comprises: generating a recommended song list based on the first optimized songs [Paragraph 0007 teaches filtering the first list of media content items to generate a second list of media content items; Paragraph 0086 teaches generating a second list of media content items by removing content items from the first list that are not consistent with the user data; Paragraph 0087 teaches the filtered list of content items includes media content items that match the user’s taste; Paragraph 0102 teaches the user data includes information about a playback history of the user, input received from the user, and/or account (e.g., profile) information associated with the user, where respective media content items in the second list of media content items are associated with respective attributes].
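The filtering step Klamra is cited for, dropping items inconsistent with user data or a chosen attribute, can be sketched as below; the attribute names and sample songs are mine, invented for illustration:

```python
def filter_by_attribute(songs, attribute, allowed):
    """Keep only songs whose given attribute (e.g. a song language or a
    release year) falls within the allowed set, as in the cited filtering step."""
    return [song for song in songs if song.get(attribute) in allowed]

songs = [
    {"title": "t1", "language": "en"},
    {"title": "t2", "language": "fr"},
]
print([s["title"] for s in filter_by_attribute(songs, "language", {"en"})])  # ['t1']
```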
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Lu by filtering the songs based on a third song feature to obtain first optimized songs, wherein the generating a recommended song list based on the target songs comprises generating a recommended song list based on the first optimized songs, as taught by Klamra [Paragraphs 0007, 0086, 0087], because both applications are directed to the generation of music playlists based on song attributes; by filtering the list based on additional attributes, the user’s experience is improved by providing a more accurate and relevant playlist to the user.

Same rationale applies to claims 15 to 18, since they recite similar limitations.

Response to Arguments

The following is in response to arguments filed on December 4, 2025. Applicant’s arguments have been fully and respectfully considered, but they are not persuasive.

Claim Rejections - 35 USC § 112

In regards to claim 1, Applicant argues: “Paragraph [0077] of the specification discloses that the number of the songs in the recommended song list (i.e., the number of songs in the song list corresponding to recommended song list) can be preset or predetermined. Thus, before obtaining the recommended song list, the number of the songs in the recommended song list is determined and acquired”.

In response to the preceding argument, Examiner respectfully points out that determining a number of the songs based on a preset or predetermined number, where the preset number corresponds to the number of songs that will be included in the recommended song list, does not mean the same as “determining a number of candidate songs, according to a preset number of songs in the recommended list”.
It appears that the intention of the limitation is to “determine a number of songs for the recommended song list”, or in other words to “determine a number of songs to be included in the recommended song list”, based on a preset number. Examiner respectfully suggests clarifying the language of the limitation in accordance with the intention as explained in the remarks, and in line with paragraph [0077] of the specification.

Claim Rejections - 35 USC § 101

In regards to claim 1, Applicant argues that “the claim 1 does not recite a mental process because the claim, under its broadest reasonable interpretation, does not cover performance in the mind. For example, the claimed step of "processing the candidate song in a candidate library by using a first feature extraction model" now requires action by the first feature extraction model that cannot be practically applied in the mind. Similarly, the claimed step of "processing the seed song by using the first feature extraction model" cannot be practically performed in the human mind, because the human mind is not capable of constructing the first feature extraction model”.

In response to the preceding argument, Examiner respectfully disagrees and respectfully submits that merely invoking computers or machinery as a tool to perform an existing process does not integrate a judicial exception into a practical application and, therefore, does not provide significantly more. Moreover, there is no indication of using a special or non-generic machine learning model to perform the operations. Adding a computer algorithm recited at a high level of generality, without significantly more, to a claim covering an abstract concept is insufficient to render the claim eligible where the claims are silent as to how the algorithm aids the method, the extent to which a computer aids the method, or the significance of the computer to the performance of the method, and amounts to merely saying “apply it”.
In order for a machine to add significantly more, it must “play a significant part in permitting the claimed method to be performed, rather than function solely as an obvious mechanism for permitting a solution to be achieved more quickly” (see, e.g., Versata Development Group v. SAP America, 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015); see also MPEP 2106.05(f)(II)(v), requiring the use of software to tailor information and provide it to the user on a generic computer, Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1370-71, 115 USPQ2d 1636, 1642 (Fed. Cir. 2015)).

In regards to claim 1, Applicant further argues that “the additional elements recited in claim 1 integrate the alleged abstract idea into a practical application”, and more specifically, that “the target feature expression has the same structure as and the feature expressions of the candidate songs, and therefore the similarity of the two can be evaluated in a plurality of dimensions in the subsequent steps, thereby improving the matching degree between the generated recommended song list and the seed song in a plurality of dimensions”.

In response to the preceding argument, Examiner respectfully submits that the claims, as presently presented, are silent in regard to any type of structure in relation to the feature expressions of the songs. Therefore, it is not clear how such language correlates with the claims. Furthermore, it is not clear how the elements highlighted above, including improving a matching degree, constitute an improvement in the functioning of a computer, or an improvement to another technology or technical field achieved with the claimed invention, nor is their correlation with the claim limitations clear. The claims are directed to an abstract idea without significantly more, under the “Mental Processes” group of abstract ideas.
Claim Rejections - 35 USC § 102/103

In regards to claim 1, Applicant argues that “Lu appears to be entirely silent about extracting the features of the candidate song by using a first feature extraction model”.

In response to the preceding argument, Examiner respectfully disagrees and respectfully points out that the “feature extraction model” is not comprehensively defined in the claims as presently presented, nor do the claims require a specific type or configuration of “extraction model”. Lu discloses extraction logic used to detect, extract, and label the songs in a combination of steps which include the use of models, and therefore discloses a feature extraction model as required by the claims as presently presented. Lu [Paragraph 0021] teaches detecting musical attributes for each song, which become the basis for music similarity measurement, where the attribute detection logic coupled to the song data store processes the songs and generates associated tags, therefore generating the candidate song library information by extracting at least two features of the candidate songs; [Paragraph 0023] tags are generated in various stages, including music feature analysis, which extracts various features (e.g., 102-dimensional features) describing music properties and reduces feature dimensions. Lu [Paragraph 0024] teaches that a next stage includes learning, which models each tag as a Gaussian Mixture Model (or GMM, a common technique in pattern recognition) using a set of training data and the extracted features, and further [Paragraph 0025] teaches a third stage, referred to as automatic annotation, which leverages the GMM to select the most probable tag from each category, therefore processing the songs using a feature extraction model.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
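As a note on the §102/103 discussion above: Lu's "automatic annotation" stage, as the examiner characterizes it (model each tag probabilistically, then select the most probable tag in a category for a song's feature vector), can be sketched as follows. For brevity this uses a single diagonal Gaussian per tag rather than a full Gaussian mixture, and all tag names and parameters are hypothetical, not from Lu.

```python
import math

# Sketch of selecting the most probable tag for a song's feature vector,
# with each tag modeled as a diagonal Gaussian (a simplification of the
# GMM described in Lu's annotation stage). Parameters are hypothetical.

def log_gaussian(x, mean, var):
    """Log-density of a diagonal Gaussian evaluated at feature vector x."""
    return sum(
        -0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
        for xi, m, v in zip(x, mean, var)
    )

def annotate(features, tag_models):
    """Pick the tag whose model assigns the highest likelihood.

    tag_models maps tag name -> (mean vector, variance vector)."""
    return max(tag_models, key=lambda t: log_gaussian(features, *tag_models[t]))
```

A full GMM would sum the densities of several weighted Gaussian components per tag, but the selection step (argmax over tags) is the same.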
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAQUEL PEREZ-ARROYO, whose telephone number is (571) 272-8969. The examiner can normally be reached Monday - Friday, 8:00am - 5:30pm, Alt Friday, EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi, can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RAQUEL PEREZ-ARROYO/
Primary Examiner, Art Unit 2169

Prosecution Timeline

Nov 07, 2024
Application Filed
Aug 30, 2025
Non-Final Rejection — §101, §102, §103
Dec 04, 2025
Response Filed
Mar 21, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566786
NATURAL LANGUAGE PROCESSING WORKFLOW FOR RESPONDING TO CLIENT QUERIES
2y 5m to grant Granted Mar 03, 2026
Patent 12566726
ENABLING EXCLUSION OF ASSETS IN IMAGE BACKUPS
2y 5m to grant Granted Mar 03, 2026
Patent 12555109
DETERMINISTIC CONCURRENCY CONTROL FOR PRIVATE BLOCKCHAINS
2y 5m to grant Granted Feb 17, 2026
Patent 12547602
LOG ENTRY REPRESENTATION OF DATABASE CATALOG
2y 5m to grant Granted Feb 10, 2026
Patent 12517948
INFORMATION PROCESSING METHOD AND DEVICE FOR SORTING MUSIC IN A PLAYLIST
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


