Prosecution Insights
Last updated: April 19, 2026
Application No. 18/368,342

ENHANCED E-BOOK PROVIDING REAL-TIME FEEDBACK AND GUIDED CONTENT FOR MUSICAL INSTRUCTION

Non-Final OA: §101, §103
Filed
Sep 14, 2023
Examiner
CASTILHO, EDUARDO D
Art Unit
3698
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Accelerando LLC
OA Round
1 (Non-Final)
Grant Probability: 47% (Moderate)
OA Rounds: 1-2
To Grant: 3y 9m
With Interview: 69%

Examiner Intelligence

Career Allow Rate: 47% (135 granted / 289 resolved; -5.3% vs TC avg)
Interview Lift: strong, +22.1% in resolved cases with interview vs. without
Avg Prosecution: 3y 9m typical timeline; 32 applications currently pending
Total Applications: 321 career history, across all art units
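
For reference, the headline figures above can be reproduced from the raw counts. A minimal Python sketch (the 52.0% Tech Center baseline is an assumption implied by the displayed -5.3% delta, not a published number):

```python
# Reconstructing the examiner stats shown above from raw counts.
# tc_avg is an assumed baseline implied by the displayed -5.3% delta.
granted, resolved = 135, 289
tc_avg = 52.0

allow_rate = 100 * granted / resolved   # 46.7%, displayed as 47%
delta_vs_tc = allow_rate - tc_avg       # about -5.3 percentage points

print(f"Career allow rate: {allow_rate:.1f}% ({delta_vs_tc:+.1f}% vs TC avg)")
```

The same 135/289 counts drive the 47% grant probability used in the projections below.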

Statute-Specific Performance

§101: 23.4% (-16.6% vs TC avg)
§103: 32.7% (-7.3% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 29.0% (-11.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 289 resolved cases
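
Notably, each statute's rate and its delta imply one common Tech Center baseline. A short sketch of that arithmetic (reading each delta as examiner rate minus TC average is an assumption based on the labels above):

```python
# Recover the implied Tech Center average for each statute from the
# examiner's rate and the displayed "vs TC avg" delta (assumed to
# mean examiner_rate - tc_avg).
rates = {
    "§101": (23.4, -16.6),
    "§103": (32.7, -7.3),
    "§102": (10.8, -29.2),
    "§112": (29.0, -11.0),
}
for statute, (examiner_rate, delta) in rates.items():
    tc_avg = examiner_rate - delta
    print(f"{statute}: examiner {examiner_rate:.1f}%, implied TC avg {tc_avg:.1f}%")
```

All four rows resolve to the same 40.0% baseline, consistent with the note that the Tech Center average shown is a single estimate.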

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The Information Disclosure Statements filed 12/22/2023 and 04/19/2024 have been considered. Initialed copies of the forms PTO-1449 are enclosed herewith.

Election/Restrictions

Applicant’s election without traverse of claims 41-48 in the reply filed on 12/19/2025 is acknowledged. Claims 1 and 3-20 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected subcombination, there being no allowable generic or linking claim.

Acknowledgements

This Office Action addresses the response filed on 12/19/2025:
Claims 2, 21-40 and 49-54 were canceled.
Claims 1 and 3-20 were withdrawn.
Claims 55-60 were newly introduced.
Claims 41-48 and 55-60 are pending.
Claims 41-48 and 55-60 were examined.

Claim Interpretation / Definitions from the Specification

The claims recite “machine learning module(s)”. According to the specification as filed, a “machine learning module” is defined as part of the “software instructions”:

[0096] "…the software instructions include a machine learning module, also referred to herein as artificial intelligence software. As used herein, a machine learning module refers to a computer implemented process (e.g., a software function) that implements one or more specific machine learning algorithms…"

Therefore, in view of the lexicographic definition above, the claimed term is not being interpreted as a generic placeholder for structure under 35 U.S.C. § 112(f), but rather as a software component.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 41-48 and 55-60 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

According to MPEP 2106 II, it is essential that the broadest reasonable interpretation (BRI) of the claim be established prior to examining a claim for eligibility. Further, MPEP 2103 I C establishes that the subject matter of a properly construed claim is defined by the terms that limit the scope of the claim when given their broadest reasonable interpretation. It is this subject matter that must be examined.

Regarding the independent claims, claims 41 and 48 recite “storing and/or providing... content elements, for display to and/or access by the individual user”, a statement of intended use or field of use. See MPEP 2114 II.

In the instant case, claims 41-47 are directed to a method, and claims 48 and 55-60 are directed to a system. Therefore, these claims fall within the four statutory categories of invention. Specifically, the language of the claims that recites an abstract idea is marked in bold below:

a. “receiving, by a processor of a computing device, one or more measures of proficiency evaluating performance of the individual user on one or more musical pieces”;
b. “automatically generating, by the processor, using one or more machine learning module(s), the one or more tailored content elements based on the one or more measures of proficiency”; and
c. “storing and/or providing, by the processor, the one or more tailored content elements, for display to and/or access by the individual user.”

Therefore, the portions highlighted in bold above recite monitoring and analyzing data, which is an abstract idea grouped within the mental processes grouping of abstract ideas in prong one of step 2A of the Alice/Mayo two-part test (see MPEP 2106.04). The claims are grouped within mental processes because the steps recited describe collecting information, analyzing it, and displaying certain results of the collection and analysis, which is a concept that can be performed in the human mind or by pen and paper. Accordingly, the claims recite an abstract idea.

This judicial exception is not integrated into a practical application. Specifically, with respect to using a processor of a computing device and memory to perform the recited steps/functions, these additional elements perform steps or functions such as “receiving… measures…”, “generating… content elements…”, and “storing… content elements…”. These additional elements are recited at a high level of generality such that they represent no more than mere instructions to apply the exception using a generic computer component, which only serves to use computers as a tool to perform the abstract idea. Therefore, these elements do not integrate the abstract idea into a practical application because they require no more than a computer performing functions that correspond to acts required to carry out the abstract idea. The additional elements of an interactive electronic book and one or more machine learning module(s) amount to generally linking the use of the judicial exception to a particular technological environment or field of use. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, following the analysis of step 2A, prong two, the claims are still directed to an abstract idea.

With respect to step 2B of the analysis, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional computer elements are a processor of a computing device and memory, an interactive electronic book, and one or more machine learning module(s). The processor of a computing device and memory perform the steps/functions of “receiving… measures…”, “generating… content elements…”, and “storing… content elements…”, and amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept beyond the abstract idea of monitoring and analyzing data. The additional elements of an interactive electronic book and one or more machine learning module(s) amount to generally linking the use of the judicial exception to a particular technological environment or field of use. As discussed above, taking the claim elements separately, these additional elements perform the steps or functions that correspond to the actions required to perform the abstract idea. Viewed as a whole, the combination of elements recited in the claims merely recites the concept of monitoring and analyzing data. Therefore, the claims are not eligible.
Dependent claims 42-47 and 55-60 further recite the following additional language, in which elements which merely further define the identified abstract idea are marked in bold below:

d) wherein the computing device is a server of a cloud-based system.
e) comprising: receiving, by the processor, an audio signal corresponding to a first performance of the individual user on a first musical piece of the one or more musical pieces; automatically identifying, by the processor, one or more musical features from the received audio signal, said one or more musical features selected from (i), (ii), and (iii) as follows: (i) a sequence of pitches, (ii) a rhythm, and (iii) a tempo, and automatically determining, by the processor, where the identified one or more musical features agree with and/or deviate from corresponding reference features of the first musical piece; and determining, by the processor, the one or more measures of proficiency based at least in part on the determined agreement with and/or deviation from the reference features of the first musical piece.
f) wherein the one or more tailored content elements comprise a generated musical piece.
g) wherein the one or more machine learning modules comprise a first machine learning module that receives, as input, at least a portion of the one or more measures of proficiency, and generates, as output, a musical notation string comprising a plurality of characters representing the generated musical piece.
h) wherein the one or more tailored content elements comprise a generated explanatory content element.
i) wherein the one or more machine learning modules comprise a second machine learning module that receives, as input, at least a portion of the one or more measures of proficiency, and generates, as output, one or more text strings, each providing a human language explanation of a particular content area.

Examiner notes that, for elements recited in the dependent claims which were previously analyzed as additional elements of the independent claims above (i.e., a processor of a computing device and memory), the assessment of these elements under step 2A and step 2B for the dependent claims is inherited from the analysis of the independent claims and omitted for brevity, unless noted by Examiner below.

With respect to claims 42 and 55, the claims include language which does not introduce additional elements/functions. The additional language merely represents statements directed to non-functional descriptive material by describing what the device "is" (i.e., part of a system). Those statements are insufficient to significantly alter the eligibility analysis. This language further elaborates the abstract idea of monitoring and analyzing data identified in the analysis of independent claims 41 and 48. The additional elements/functions, alone or in combination, are insufficient to integrate the abstract idea into a practical application because the additional elements/functions do not pertain to an improvement to the functioning of a computer or to another technology. The additional elements/functions, alone or in combination, do not offer significantly more than the abstract idea, because the additional elements/functions simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception.
With respect to the eligibility analysis of claims 43 and 56, the claims recite item e) above, which represents the additional elements/functions of receiving an audio signal, identifying features, and determining measures of proficiency. This language further elaborates the abstract idea of monitoring and analyzing data identified in the analysis of independent claims 41 and 48. The additional elements/functions, alone or in combination, are insufficient to integrate the abstract idea into a practical application because the additional elements/functions do not pertain to an improvement to the functioning of a computer or to another technology. The additional elements/functions, alone or in combination, do not offer significantly more than the abstract idea, because the additional elements/functions simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception. Examiner notes the claims further describe the mental process of collecting information, analyzing it, and displaying certain results of the collection and analysis.

With respect to claims 44 and 57, the claims include language which does not introduce additional elements/functions. The additional language merely represents statements directed to non-functional descriptive material by describing what the content elements "comprise" (i.e., a musical piece). Those statements are insufficient to significantly alter the eligibility analysis. This language further elaborates the abstract idea of monitoring and analyzing data identified in the analysis of independent claims 41 and 48. The additional elements/functions, alone or in combination, are insufficient to integrate the abstract idea into a practical application because no further additional elements/functions are introduced to the BRI of the claims. The additional elements/functions, alone or in combination, do not offer significantly more than the abstract idea, because no further additional elements/functions are introduced to the BRI of the claims.

With respect to claims 45 and 58, the claims include language which does not introduce additional elements/functions. The additional language merely represents statements directed to non-functional descriptive material by describing what the software (i.e., the first machine learning module) comprises. Those statements are insufficient to significantly alter the eligibility analysis. This language further elaborates the abstract idea of monitoring and analyzing data identified in the analysis of independent claims 41 and 48. The additional elements/functions, alone or in combination, are insufficient to integrate the abstract idea into a practical application because the additional elements/functions do not pertain to an improvement to the functioning of a computer or to another technology. The additional elements/functions, alone or in combination, do not offer significantly more than the abstract idea, because the additional elements/functions simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception. Examiner notes the claims recite what the "one or more machine learning module(s)" comprise; this language is directed to stored data.
Specifically, while the independent claims recite one or more modules being “used” in the generating step/function, the transitional term “comprising” is synonymous with “including,” “containing,” or “characterized by,” and is inclusive or open-ended and does not exclude additional, unrecited elements or method steps. Emphasis added. See, e.g., Mars Inc. v. H.J. Heinz Co., 377 F.3d 1369, 1376, 71 USPQ2d 1837, 1843 (Fed. Cir. 2004) and MPEP 2111.03. Since the generating step/function is based upon (any) one of the "one or more machine learning module(s)", the resulting content elements can also be based upon elements (i.e., modules) or method steps extraneous to the description of a "first machine learning module". In other words, the generating step can be made using other unrecited matter than the "first machine learning module". Therefore, the language “wherein the one or more machine learning modules comprise a first machine learning module that…” is directed to stored data (i.e., a description of a software component not required by the claims to be executed).

With respect to claims 46 and 59, the claims include language which does not introduce additional elements/functions. The additional language merely represents statements directed to non-functional descriptive material by describing what the content elements "comprise" (i.e., a generated explanatory content element). Those statements are insufficient to significantly alter the eligibility analysis. This language further elaborates the abstract idea of monitoring and analyzing data identified in the analysis of independent claims 41 and 48. The additional elements/functions, alone or in combination, are insufficient to integrate the abstract idea into a practical application because no further additional elements/functions are introduced to the BRI of the claims. The additional elements/functions, alone or in combination, do not offer significantly more than the abstract idea, because no further additional elements/functions are introduced to the BRI of the claims.

With respect to claims 47 and 60, the claims include language which does not introduce additional elements/functions. The additional language merely represents statements directed to non-functional descriptive material by describing what the software (i.e., the second machine learning module) comprises. Those statements are insufficient to significantly alter the eligibility analysis. This language further elaborates the abstract idea of monitoring and analyzing data identified in the analysis of independent claims 41 and 48. The additional elements/functions, alone or in combination, are insufficient to integrate the abstract idea into a practical application because the additional elements/functions do not pertain to an improvement to the functioning of a computer or to another technology. The additional elements/functions, alone or in combination, do not offer significantly more than the abstract idea, because the additional elements/functions simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception. Examiner notes the claims recite what the "one or more machine learning module(s)" comprise; this language is directed to stored data.
Specifically, while the independent claims recite one or more modules being “used” in the generating step/function, the transitional term “comprising” is synonymous with “including,” “containing,” or “characterized by,” and is inclusive or open-ended and does not exclude additional, unrecited elements or method steps. Emphasis added. See, e.g., Mars Inc. v. H.J. Heinz Co., 377 F.3d 1369, 1376, 71 USPQ2d 1837, 1843 (Fed. Cir. 2004) and MPEP 2111.03. Since the generating step/function is based upon (any) one of the "one or more machine learning module(s)", the resulting content elements can also be based upon elements (i.e., modules) or method steps extraneous to the description of a "second machine learning module". In other words, the generating step can be made using other unrecited matter than the "second machine learning module". Therefore, the language “wherein the one or more machine learning modules comprise a second machine learning module that…” is directed to stored data (i.e., a description of a software component not required by the claims to be executed).

Therefore, while the additional language d) - i) of dependent claims 42-47 and 55-60 slightly modifies the analysis provided with respect to independent claims 41 and 48, these additional elements/functions are insufficient to render the dependent claims eligible, as detailed above. Therefore, these dependent claims are also ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 41-48 and 55-60 are rejected under 35 U.S.C. 103 as being unpatentable over Si et al. (US 2021/0104169 A1) (hereinafter Si) in view of Aprameya (NPL 2018, listed in PTO-892 as reference "U").

With respect to claims 41 and 48, Si teaches a system for automatically generating one or more tailored content elements customized to an individual user of an interactive electronic book (see Fig. 3, devices 320-360, paragraphs [0042]-[0043]); and a method for automatically generating one or more tailored content elements customized to an individual user of an interactive electronic book (System and method for AI based skill learning) comprising:

(a) receiving, by a processor of a computing device, one or more measures of proficiency evaluating performance of the individual user on one or more musical pieces (see Fig.
8, step 850, paragraph [0068]: “The second part of the AI based skill learning system 350 is to adapt the animated skill learning tutoring based on an adaptively modified tutoring plan devised based on actually observed real-time learning performance of the skill learner. FIG. 8B is a flowchart of an exemplary process of the second aspect of the AI based skill learning system for adaptively tutoring a skill based on an animated tutoring script and dynamic observations, in accordance with an embodiment of the present teaching. Once the animated tutoring instructions are provided or delivered to create an augmented reality scene (see FIGS. 2A and 7), sensors on the wearable 230 or the device 240 are utilized to make observations, at 850, of the skill learner's performance. Such observed information is sent to the audio/visual information analyzer 750 which then analyzes, at 860, the skill learner's performance in terms of following the animated tutoring instructions. The analysis on observations in each modality (e.g., audio or video) may be performed individually or jointly. The analysis may yield various measures in different modalities. For example, hand positions with respect to observed keys on a piano, spatial configurations among different fingers, movements of the fingers, etc. Acoustically, the analysis may yield different measurements such as the rhythms, sound patterns, etc. resulted from the skill learner's performance.”);

(b) automatically generating, by the processor... the one or more tailored content elements based on the one or more measures of proficiency (see Fig. 8, steps 870-880, paragraph [0069]: “Such measurements from the dynamic observations may be further processed to identify, at 870, discrepancies between expected performance and the skill learner's actual performance. This is achieved by the discrepancy identifier 760. For example, visually it may be analyzed whether the skill learner's hands/fingers were positioned as shown in the augmented reality scene, whether the skill learner's hands/fingers moved in accordance with the visual/audio instructions. In addition, acoustically, audio information observed may also be analyzed in light of the expected sound effect as expected to obtain discrepancy in the audio domain. Based on the discrepancies, the adaptive tutoring plan generator 770 may generate accordingly, at 880, an adaptive tutoring plan with respect to the discrepancies. In some embodiments, such modification may be adapted based on the playing speed.”); and

(c) storing and/or providing, by the processor, the one or more tailored content elements, for display to and/or access by the individual user (see Fig. 8, step 890, paragraph [0069]: “...In some embodiments, the adjustment to the tutoring plan may be to return to some more teaching content to be delivered to the skill learner. In some embodiments, the modification may also be personalized based on the learning history of the current skill learner. With the adaptively modified tutoring plan, the user interface 700 may communicate, at 890, with the skill learner using the adapted tutoring plan, which may include informing the skill learner the adjustment to the tutoring content before proceeding to carrying out the adjust tutoring plan via the audio/visual information projector 740 to deliver the modified tutoring content to the skill learner.").
Si does not explicitly disclose a method and system comprising: [the one or more tailored content elements are automatically generated] using one or more machine learning module(s).

However, Aprameya discloses a method and system (An autonomous intelligent music teacher) comprising: [the one or more tailored content elements are automatically generated] using one or more machine learning module(s) (see Fig. 1, Autonomous Intelligent Music Teacher, page 13, "After evaluating a student’s performance, the computer holds data on the student’s strengths and weaknesses. This system, utilizing AI’s decision-making skills, strives to evolve to best fit a student’s needs. For example, if AIMT ranks the tempo of a performance as “failing,” it takes this into account when generating a new piece.").

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the machine learning and AI abilities as disclosed by Aprameya in the method and system of Si, the motivation being to truly respond to a performance's result, by evaluating a student's performance to discover which musical aspects are more difficult to grasp, acting as a teacher by providing a more interactive environment and catering to the student's requirements (see Aprameya, page 12).

With respect to the BRI of the claims, Examiner notes that claims 41 and 48 recite “storing and/or providing... content elements, for display to and/or access by the individual user”, a statement of intended use or field of use. See MPEP 2114 II.

With respect to claims 42 and 55, the combination of Si and Aprameya teaches all the subject matter of the method and system as described above with respect to claims 41 and 48. Furthermore, Si discloses a method and system wherein the computing device is a server of a cloud-based system (see Fig. 3, network 330, AI based skill learning system 350, paragraphs [0042], [0043] and [0071]). The motivation for combining the references remains unaltered from the motivation described above in conjunction with the rejection of the independent claims.

With respect to claims 43 and 56, the combination of Si and Aprameya teaches all the subject matter of the method and system as described above with respect to claims 41 and 48. Furthermore, Si discloses a method and system comprising: receiving, by the processor, an audio signal corresponding to a first performance of the individual user on a first musical piece of the one or more musical pieces (see Fig. 4A, paragraph [0044]: “FIG. 4A is a flowchart of an exemplary high level process of creating animated tutoring scripts based on online information, in accordance with an embodiment of the present teaching. To generate an animated tutoring script, media data about a performance are first received at 400 by the animated tutoring script generator 320. Performance can be an artistic performance or a recording of some process in which a person conducted a sequence of operation, e.g., playing on drums, playing a piece of music on a musical instrument, . . . , assembling a device/equipment, operating on an equipment, etc.
In some embodiments, such received media data correspond to multimedia information with media data across different modalities such as a video which includes visual, audio, and optionally text information.”); automatically identifying, by the processor, one or more musical features from the received audio signal, said one or more musical features selected from (i), (ii), and (iii) as follows: (i) a sequence of pitches, (ii) a rhythm, and (iii) a tempo, and automatically determining, by the processor, where the identified one or more musical features agree with and/or deviate from corresponding reference features of the first musical piece (see Fig. 8B, step 850, paragraph [0068]: “... sensors on the wearable 230 or the device 240 are utilized to make observations, at 850, of the skill learner's performance. Such observed information is sent to the audio/visual information analyzer 750 which then analyzes, at 860, the skill learner's performance in terms of following the animated tutoring instructions. The analysis on observations in each modality (e.g., audio or video) may be performed individually or jointly. The analysis may yield various measures in different modalities. For example, hand positions with respect to observed keys on a piano, spatial configurations among different fingers, movements of the fingers, etc. Acoustically, the analysis may yield different measurements such as the rhythms, sound patterns, etc. resulted from the skill learner's performance.”); and determining, by the processor, the one or more measures of proficiency based at least in part on the determined agreement with and/or deviation from the reference features of the first musical piece (see Fig. 8B, step 870, discrepancies, paragraph [0069]: “Such measurements from the dynamic observations may be further processed to identify, at 870, discrepancies between expected performance and the skill learner's actual performance. This is achieved by the discrepancy identifier 760.”). The motivation for combining the references remains unaltered from the motivation described above in conjunction with the rejection of the independent claims.

With respect to claims 44 and 57, the combination of Si and Aprameya teaches all the subject matter of the method and system as described above with respect to claims 41 and 48. Furthermore, Si discloses a method and system wherein the one or more tailored content elements comprise a generated musical piece (see paragraph [0069]: “... Based on the discrepancies, the adaptive tutoring plan generator 770 may generate accordingly, at 880, an adaptive tutoring plan with respect to the discrepancies. In some embodiments, such modification may be adapted based on the playing speed. In some embodiments, the adjustment to the tutoring plan may be to return to some more teaching content to be delivered to the skill learner. In some embodiments, the modification may also be personalized based on the learning history of the current skill learner.
With the adaptively modified tutoring plan, the user interface 700 may communicate, at 890, with the skill learner using the adapted tutoring plan, which may include informing the skill learner the adjustment to the tutoring content before proceeding to carrying out the adjust tutoring plan via the audio/visual information projector 740 to deliver the modified tutoring content to the skill learner.”). Examiner notes the “generated animated tutoring script incorporates information about the piece of music, instructional information (visual or oral) on, e.g., which finger is on which key and optional annotated timing/playing information, which may be synchronized with the music” (see Fig. 3, paragraph [0040]).

Regarding the BRI of the claims, Examiner notes that claims 44 and 57 recite “wherein the one or more tailored content elements comprise a generated musical piece”, language directed to non-functional descriptive material. See MPEP 2111.05. The motivation for combining the references remains unaltered from the motivation described above in conjunction with the rejection of the independent claims.

With respect to claims 45 and 58, the combination of Si and Aprameya teaches all the subject matter of the method and system as described above with respect to claims 44 and 57. Furthermore, Aprameya discloses a method and system wherein the one or more machine learning modules comprise a first machine learning module that receives, as input, at least a portion of the one or more measures of proficiency, and generates, as output, a musical notation string comprising a plurality of characters representing the generated musical piece (see Fig. 1, Generate music, pages 13-14: "Demonstrated by the link in Fig. 1 between “Adapting to a Performance” and “Generate Music,” the process of creating music remains nearly identical to the method previously described, differing with respect to what AIMT knows about the student. It has now collected data on what musical aspects a student has yet to master; therefore, when asking for the parameters of the next piece, the system should fill in certain aspects on its own. If a student has yet to learn a scale, AIMT does not allow any new specifications for the key signature, which indicates what scale a piece is in. It then creates a piece with the same key signature as the previous one.").

Regarding the BRI of the claims, Examiner notes that claims 45 and 58 recite “a first machine learning module that receives, as input, at least a portion of the one or more measures of proficiency, and generates, as output, a musical notation string comprising a plurality of characters representing the generated musical piece...”, language directed to method steps that are not positively recited. The motivation for combining the references remains unaltered from the motivation described above in conjunction with the rejection of the independent claims.

With respect to claims 46 and 59, the combination of Si and Aprameya teaches all the subject matter of the method and system as described above with respect to claims 41 and 48. Furthermore, Si discloses a method and system wherein the one or more tailored content elements comprise a generated explanatory content element (see paragraph [0037]: “In some embodiments, an animated tutoring script may also incorporate oral instructions to be used in connection with certain selected learning mode.
For example, oral instructions may be invoked to instruct, in a synchronous manner, a skill learner orally while the skill learner is following the visual instructions provided. Such oral instructions may be synchronized with the visual instructions whenever appropriate. For instance, in learning how to operate an equipment, the visual instructions may visually show a skill learner how to physically operate the equipment and oral instructions may be synchronously provided to deliver other relevant instructions (e.g., hold down the button for no longer than 10 seconds). In some situations (e.g., piano playing skill learning), any oral instructions may be invoked only when certain conditions are met, e.g., the tutoring speed is set below a certain threshold (otherwise there may not be possible to playback the oral instruction).”).

Regarding the BRI of the claims, Examiner notes that claims 46 and 59 recite “wherein the one or more tailored content elements comprise a generated explanatory content element”, language directed to non-functional descriptive material. The motivation for combining the references remains unaltered from the motivation described above in conjunction with the rejection of the independent claims.

With respect to claims 47 and 60, the combination of Si and Aprameya teaches all the subject matter of the method and system as described above with respect to claims 46 and 59. Furthermore, Si discloses a method and system wherein the one or more machine learning modules comprise a second machine learning module that receives, as input, at least a portion of the one or more measures of proficiency, and generates, as output, one or more text strings, each providing a human language explanation of a particular content area (see oral communication content that summarizes the issues, paragraph [0065]: “...The audio/visual information analyzer 750 receives on-the-fly observations from sensors located in the wearable 230/device 240 and analyze the received signals. The analysis may be directed to the performance features such as the hand positions and movements, and/or the sound yielded from the play of the skill learner. The analyzed signals may then be sent to the discrepancy identifier 760, that may compare the performance features extracted from the observations with what is the expected performance features specified in the expectation log 735. Such identified discrepancies may then be used as the basis for the adaptive tutoring plan generator 770 to derive a revised tutoring plan that may be considered as appropriate based on the observations. For example, if it is consistently observed that the skill learner's hand positions deviate too much from what were instructed, the adaptive tutoring plan generator 770 may adjust the plan to stop the continuous playing and focus on more static teaching of hand positions. If the skill learner's playing speed is consistently lagging behind the expected speed, the adaptive tutoring plan generator 770 may adjust the required speed of the hand movements to slow down until the skill learner becomes familiar with the piece.
In some embodiments, based on the observations, the adaptive tutoring plan generator 770 may also generate oral communication content that summarize the issues (e.g., the sound of a certain finger is always too weak, the hands are too far away from the black keys so that the sounds coming from such playing is not loud enough, fingers need to be arched more to produce music notes with more clarity) observed and remind the skill learner to pay attention to the identified issues.”).

Regarding the BRI of the claims, Examiner notes that claims 47 and 60 recite “a second machine learning module that receives, as input, at least a portion of the one or more measures of proficiency, and generates, as output, one or more text strings, each providing a human language explanation of a particular content area...”, language directed to method steps that are not positively recited. The motivation for combining the references remains unaltered from the motivation described above in conjunction with the rejection of the independent claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Patent Literature

Silverstein (US 10,854,180 B2) discloses a method of and system for controlling the qualities of musical energy embodied in and expressed by digital music to be automatically composed and generated by an automated music composition and generation engine, including an AI-based system receiving musical signals from its surrounding instruments and musicians, buffering and analyzing these instruments and, in response thereto, composing and generating music in real-time that will augment the music being played by the band of musicians.

Lee et al. (WO 2011002731 A2) disclose a music instruction system, including determining an extent to which the user-performed musical events have been correctly or incorrectly performed, and providing real-time or near real-time audio feedback and/or visual feedback indicating the extent to which the user-performed musical events have been correctly or incorrectly performed.

Medeot et al. (US 2021/0049990 A1) disclose a method of generating music data, including a machine learning (ML)-based structure generator for generating a piece of music.

Livne et al. (US 2023/0245586 A1) disclose a device, system and method for providing a singing teaching and/or vocal training lesson, including automatically or semi-automatically prioritizing the feedback to the user, based on the user's performance.

Chantzis et al. (US 6,417,435 B2) disclose an audio-acoustic proficiency testing device, including measurement and evaluation of audio-acoustic performances.

Kim et al. (US 9,672,799 B1) disclose a music practice feedback system, method, and recording medium, including providing a music practice feedback system that can identify difficult regions of sheet music based on a collection of information such that users can focus their practice on these difficult areas without the need for a teacher.

Non-Patent Literature

Fiebrink et al., "The Machine Learning Algorithm as Creative Musical Tool," CoRR abs/1611.00379 (NPL 2016, listed in PTO-892 as page 1, reference "V"), disclose matching approaches to machine learning to different types of musical goals.

P. Seshadri and A. Lerch (NPL 2021, listed in PTO-892 as page 1, reference "W") disclose Improving Music Performance Assessment with Contrastive Learning, including the use of contrastive learning to improve the performance of MPA systems, including a deep neural network taking an input audio recording of a musical performance and estimating a numerical rating consistent with that of a professional judge.

Dorfer et al. (NPL 2018, listed in PTO-892 as page 1, reference "X") disclose Learning to Listen, Read, and Follow: Score Following as a Reinforcement Learning Game, including multimodal RL agents that simultaneously learn to listen to music, read the scores from images of sheet music, and follow the audio along in the sheet, in an end-to-end fashion.

Pati et al. (NPL 2018, listed in PTO-892 as page 2, reference "U") disclose Assessment of Student Music Performances Using Deep Neural Networks, including improving the reliability and consistency of music performance assessments by providing reliable and reproducible feedback to students utilizing deep neural networks (DNNs) to learn better feature representations.

K. Apaydinli (NPL 2020, listed in PTO-892 as page 2, reference "V") discloses Intelligent Tutoring Systems in Music Education, including an overview of intelligent tutoring systems in music education.

Lerch et al. (NPL 2019, listed in PTO-892 as page 2, reference "W") disclose Music Performance Analysis: A Survey, including ML-based approaches for tasks such as composer classification, discovery of performance rules, or modeling performance characteristics.

L. H. Lee (NPL 2022, listed in PTO-892 as page 2, reference "X") discloses Musical Score Following and Audio Alignment, including audio-to-symbolic matching between an audio waveform and sheet music.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDUARDO D CASTILHO, whose telephone number is (571) 270-1592. The examiner can normally be reached Mon-Fri 8-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patrick McAtee, can be reached at (571) 272-7575. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDUARDO CASTILHO/
Primary Examiner, Art Unit 3698

Prosecution Timeline

Sep 14, 2023
Application Filed
Jan 16, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602699
Method for authenticating and anti-counterfeiting coffee machines or coffee grinders
2y 5m to grant • Granted Apr 14, 2026
Patent 12567076
ELECTRONIC PAYMENT NETWORK SECURITY
2y 5m to grant • Granted Mar 03, 2026
Patent 12561690
CONTACTLESS ACCESS TO SERVICE DEVICES TO FACILITATE SECURE TRANSACTIONS
2y 5m to grant • Granted Feb 24, 2026
Patent 12536526
PAPERLESS TICKET MANAGEMENT SERVICE
2y 5m to grant • Granted Jan 27, 2026
Patent 12536538
Method and System for Payment Device-Based Access
2y 5m to grant • Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 47%
With Interview (+22.1%): 69%
Median Time to Grant: 3y 9m
PTA Risk: Low
Based on 289 resolved cases by this examiner. Grant probability derived from career allow rate.
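
A hedged sketch of the projection arithmetic these numbers imply (the additive, capped interview adjustment is inferred from 47% + 22.1 ≈ 69%, not from any documented model):

```python
# Inferred projection arithmetic: base grant probability is the career
# allow rate; an interview adds the observed lift in percentage points.
# The additive, capped model is an assumption inferred from the page.
def with_interview(base_pct: float, lift_pct: float) -> float:
    return min(base_pct + lift_pct, 100.0)

base = 100 * 135 / 289                       # 46.7%, displayed as 47%
print(f"{base:.0f}%")                        # 47%
print(f"{with_interview(base, 22.1):.0f}%")  # 69%
```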
