Prosecution Insights
Last updated: April 19, 2026
Application No. 18/012,116

GENERATING MUSIC OUT OF A DATABASE OF SETS OF NOTES

Non-Final OA: §102, §103, §112
Filed: Dec 21, 2022
Examiner: SCOLES, PHILIP GRANT
Art Unit: 2837
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Independent Digital Sp. z o.o. (PL)
OA Round: 1 (Non-Final)
Grant Probability: 56% (Moderate)
OA Rounds: 1-2
To Grant: 3y 10m
With Interview: 77%

Examiner Intelligence

Grants 56% of resolved cases.

Career Allow Rate: 56% (30 granted / 54 resolved; -12.4% vs TC avg)
Interview Lift: strong, +21.3% for resolved cases with interview
Typical Timeline: 3y 10m avg prosecution; 36 applications currently pending
Career History: 90 total applications across all art units
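The headline figures on the card above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (the 30/54 counts come from the card; the 68% Tech Center baseline is an assumption inferred from the card's -12.4% delta, not an official figure):

```python
# Illustrative reconstruction of the examiner-card arithmetic.
# Counts (30 granted, 54 resolved) are from the card above; the Tech Center
# baseline is an assumption back-solved from the displayed -12.4% delta.

granted = 30
resolved = 54
tc_avg = 0.68  # assumed baseline implied by the card's delta

allow_rate = granted / resolved          # 30/54 ≈ 0.556
delta_vs_tc = allow_rate - tc_avg        # ≈ -0.124

print(f"Career allow rate: {allow_rate:.0%}")   # 56%
print(f"Delta vs TC avg: {delta_vs_tc:+.1%}")   # -12.4%
```

The same ratio style carries through the rest of the page: every percentage is computed over the 54 resolved cases, while pending applications are excluded.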

Statute-Specific Performance

§101: 1.6% (-38.4% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§102: 22.0% (-18.0% vs TC avg)
§112: 20.2% (-19.8% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 54 resolved cases
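The per-statute deltas above are each stated relative to the Tech Center baseline, so back-solving rate − delta recovers the implied baseline. A quick consistency check (illustrative only; the 40% figure is derived from the table, not an official statistic):

```python
# Consistency check: back-solve the implied Tech Center baseline from each
# (allow rate, delta) pair in the table above. If all deltas share a single
# reference point, every statute should recover the same baseline.
stats = {
    "§101": (1.6, -38.4),
    "§103": (53.3, 13.3),
    "§102": (22.0, -18.0),
    "§112": (20.2, -19.8),
}
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # all four statutes imply the same 40.0% baseline
```

That all four rows point at one baseline suggests the "vs TC avg" reference is a single estimate applied uniformly across statutes.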

Office Action

§102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/21/2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claim 1 is objected to because of the following informality: the claim recites a string of steps without indenting the steps. Where a claim sets forth a plurality of elements or steps, each element or step of the claim should be separated by a line indentation. See MPEP § 608.01(m) and 37 CFR § 1.75(i). Appropriate correction is required.

Claim 4 is objected to because of the following informalities: in lines 2-3, "a digital score" should read "the digital score"; in line 3, "desired parameters" should read "set parameters". Appropriate correction is required.

Claim 6 is objected to because of the following informality: in lines 3-4, "the beginning" should read "its beginning." Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.
112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-6 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.

Regarding claim 1, the phrase "scope of content modulation" in line 4 is not described in the specification in such a way as to enable one skilled in the art to make or use the invention. Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Regarding claim 1, the phrase "artistic level" in line 6 is not described in the specification in such a way as to enable one skilled in the art to make or use the invention. Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites a long run-on sentence improperly separated by commas, thereby creating unclear relationships between various recited limitations and elements. Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim 1 recites the phrase "modification algorithms music input files" in lines 2-3. It is unknown what is meant by this phrase. In the interest of advancing prosecution, this phrase will be interpreted as "modification algorithms for music input files." Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Regarding claim 1, the phrase "characteristics such as" in line 3 renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d). In the interest of advancing prosecution, this phrase will be interpreted as "characteristics including." Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim 1 recites the limitation "mood of the composition" in line 3. There is insufficient antecedent basis for this limitation in the claim. Because "a composition" is recited later in claim 1, this limitation will be interpreted as "mood," consistent with the other items in the same list. "Mood of a composition" would also be acceptable to establish antecedent basis of "a composition" in this limitation rather than later in the same claim. Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim 1 recites the phrase "duration and the scope of content modulation is selected" in line 4. It is unknown what is meant by this phrase.
In the interest of advancing prosecution, this phrase will be interpreted as "duration and the scope of content modulation." Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim 1 recites the limitation "the effect being" in line 4. There is insufficient antecedent basis for this limitation in the claim. In the interest of advancing prosecution, this limitation will be interpreted as "an effect being." Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim 1 recites the limitation "the intended artistic expression" in line 5. There is insufficient antecedent basis for this limitation in the claim. In the interest of advancing prosecution, this limitation will be interpreted as "an intended artistic expression." Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim 1 recites the limitation "on the technical level" in line 6. There is insufficient antecedent basis for this limitation in the claim. In the interest of advancing prosecution, this limitation will be interpreted as "on a technical level." Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim 1 recites the phrase "on the artistic level" in line 6. It is unknown what is meant by this phrase in this claim. Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim 1 recites the limitation "the artistic level" in line 6. There is insufficient antecedent basis for this limitation in the claim. In the interest of advancing prosecution, this limitation will be interpreted as "an artistic level." Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim 1 recites the phrase "the input music contents for" in line 7. It is unknown what is meant by this phrase in this claim. In the interest of advancing prosecution, this phrase will be interpreted as "analyze the input music contents for."
Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim 1 recites the limitation "the presence of patterns" in lines 7-8. There is insufficient antecedent basis for this limitation in the claim. In the interest of advancing prosecution, this limitation will be interpreted as "a presence of patterns." Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim 1 recites the limitation "music compositions of the given type" in line 9. There is insufficient antecedent basis for this limitation in the claim. In the interest of advancing prosecution, this limitation will be interpreted as "music compositions of a given type." Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim 1 recites the limitation "the part of the given instrument" in line 10. There is insufficient antecedent basis for this limitation in the claim. In the interest of advancing prosecution, this limitation will be interpreted as "a part of a given instrument." Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim 1 recites the limitation "the final version of the record" in lines 15-16. There is insufficient antecedent basis for this limitation in the claim. In the interest of advancing prosecution, this limitation will be interpreted as "a final version of the record." Claims 2-6 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim 3 recites the limitation "the repository" in line 3. There is insufficient antecedent basis for this limitation in the claim. In the interest of advancing prosecution, this limitation will be interpreted as "a repository."

Claim 5 recites the limitation "the repository" in line 3. There is insufficient antecedent basis for this limitation in the claim. In the interest of advancing prosecution, this limitation will be interpreted as "a repository."
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless—(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 4-6 are rejected under 35 U.S.C. 102(a)(1) as anticipated by Silverstein (US 20190237051 A1, 08/01/2019), hereinafter Silverstein, to the extent understood.

Regarding claim 1, Silverstein discloses a method of generating music contents according to the invention, wherein input sound samples are processed (Silverstein ¶0450: "during the session system receives audio and/or MIDI data signals produced from the group of instruments during the session, and analyzes these signals for pitch data and melodic structure") according to modification algorithms for music input files (Silverstein ¶0866: "The analyzer then determines how best to modify a musical piece's rhythmic, harmonic, and other values based on these inputs and analyses."), related, in particular, to characteristics such as tempo, mood, music genre, duration and a scope of content modulation (Silverstein ¶0040: "musical experience descriptors (MXDs) such as emotion/mood and style/genre type musical experience descriptors (MXDs), timing parameters, and other musical energy (ME) quality control parameters (e.g. instrumentation, ensemble, volume, tempo, rhythm, harmony, and timing (e.g.
start/hit/stop) and framing (e.g. intro, climax, outro or ICO) control parameters)"), with an effect being a composition with an intended artistic expression (This limitation is directed to an intended result, and is therefore non-limiting. MPEP §§ 2111.04(I) and 2173.05(g). Notwithstanding, Silverstein ¶0864 states: "The primary purpose of the Feedback Subsystem B42 is to accept user and/or computer feedback to improve, on a real-time or quasi-real-time basis, the quality, accuracy, musicality, and other elements of the musical pieces that are automatically created by the system using the music composition automation technology of the present invention."), wherein music contents are created on a technical level and on an artistic level (It is unknown what is meant by "artistic level," and "artistic level" is not described in the specification in such a way as to enable one skilled in the art to make or use the invention. Nevertheless, Silverstein ¶0866 states: "In general, the Piece Feedback Analyzer considers all available input, including, but not limited to, autonomous or artificially intelligent measures of quality and accuracy and human or human-assisted measures of quality and accuracy, and determines a suitable response to a analyzed piece of composed music."), wherein on the level of contents creation on the technical level (Silverstein ¶0451: "As shown, the Engine E1 comprises: a user GUI-Based Input Subsystem A0, a General Rhythm Subsystem A1, a General Pitch Generation Subsystem A2, a Melody Rhythm Generation Subsystem A3, a Melody Pitch Generation Subsystem A4, an Orchestration Subsystem A5, a Controller Code Creation Subsystem A6, a Digital Piece Creation Subsystem A7, and a Feedback and Learning Subsystem A8 configured as shown."), analyze the input music contents for a presence of patterns (Silverstein ¶0884: "The Musical Kernel (DNA) Subsystem B45 analyzes, extracts, and saves the elements of a piece of music that might distinguish it from any 
other piece of music. Musical Kernel (DNA) Generation Subsystem B45 performs its functions using a (musical) DNA Analyzer which accepts as inputs all elements of the musical piece and uses a music theoretic basis and filter to determine its output, which is an organizational set of all events deemed important to the DNA of a musical piece. Using this input data, the DNA Analyzer identifies and isolates specific rhythmic, harmonic, timbre-related, or other musical events that, either independently or in concert with other events, play a significant role in the musical piece."), the patterns are saved (Silverstein ¶0886: "For example, the Subsystem B45 may save the melody and all related melodic and rhythmic material, of a musical piece") in a database of business rules and music composing rules (Silverstein ¶0879: "The Preference Saver Subsystem B44 modifies and/or changes, and then saves the altered probability-based parameter tables, logic order, and/or other elements used within the system, and distributes this data to the subsystems of the system, in order or to better reflect the preferences of a system user.") used to develop generation models of music compositions (Silverstein ¶0879: "This allows the piece to be regenerated following the desired changes and to allow the subsystems to adjust the data sets, data tables, and other information to more accurately reflect the user's musical and non-musical preferences moving forward.") of a given type (Silverstein ¶0886: "For example, the Subsystem B45 may save the melody and all related melodic and rhythmic material, of a musical piece so that a user may create a new piece with the saved melody at a later time."), next a melody generator is created (Silverstein ¶0451: "As shown, the Engine E1 comprises: a user GUI-Based Input Subsystem A0, a General Rhythm Subsystem A1, a General Pitch Generation Subsystem A2, a Melody Rhythm Generation Subsystem A3, a Melody Pitch Generation Subsystem A4"), in which a digital 
score of the part (Silverstein ¶0145: "Yet, another object of the present invention is to provide novel automated music composition and generation systems for generating musical score representations of automatically composed pieces of music") of a given instrument is created (Silverstein ¶0104: "Another object of the present invention is to provide such an Automated Music Composition and Generation System, wherein an Instrument Selector Subsystem (B39) is used in the Automated Music Composition and Generation Engine, wherein piece instrument selections are determined using the probability-based instrument selection tables, and used during the automated music composition and generation process of the present invention."), wherein a database of atomic sounds is created (Silverstein ¶0855: "In particular, each audio sample contains a single note, or a chord, or a predefined set of notes. Each note, chord and/or predefined set of notes is recorded at a wide range of different volumes, different velocities, different articulations, and different effects, etc. so that a natural recording of every possible use case is captured and available in the sampled instrument library.") simultaneously (Silverstein ¶0855: "For example, on an acoustical piano with 88 keys (i.e. notes), it is not unexpected to have over 10,000 separate digital audio samples which, taken together, constitute the fully digitally-sampled piano instrument. 
During music production, these digitally sampled notes are accessed in real-time to generate the music composed by the system."), and next the music contents are sent to the generator (Silverstein ¶0450: "during the session, the system automatically generates musical descriptors from abstracted pitch and melody data, and uses the musical experience descriptors to compose music for the session on a real-time basis, and (v) in the event that the PERFORM mode has been selected, the system generates the composed music"), in which parameters are set using a controller conforming to the MIDI standard (Silverstein ¶0847 discloses setting MIDI controller codes: "As shown in FIG. 27LL, the Controller Code Generation Subsystem B32 is supported by the controller code parameter tables shown in FIG. 28S, and parameter selection mechanisms (e.g. random number generator, or lyrical-input based parameter selector) described in detail hereinabove. The form of controller code data is typically given on a scale of 0-127. Volume (CC 7) of 0 means that there is minimum volume, whereas volume of 127 means that there is maximum volume. Pan (CC 10) of 0 means that the signal is panned hard left, 64 means center, and 127 means hard right.") and subjected to automatic generation of a digital score of the composition (Silverstein ¶¶0850-0852: "The Controller Code Generation Subsystem B32 uses the instrument, instrument group and piece-wide controller code parameter tables and data sets loaded from subsystems… The controller code selected for the instrumentation of the musical piece will be used during the automated music composition and generation process of the present invention as described hereinbelow.) 
and parts for individual instruments are created (Silverstein ¶0859: "The Digital Audio Sample Retriever Subsystem B33 retrieves the individual digital audio samples that are called for in the orchestrated piece of music that has been composed by the system.") and then rendered to music tracks for each of the instruments (Silverstein ¶0860: "In short, the digital audio sample organizer subsystem B34 determines the correct placement in time and space of each audio file in a musical piece."), followed by mixing of individual tracks into a record (Silverstein ¶0861: "The Piece Consolidator Subsystem B35 collects the digital audio samples from an organized collection of individual audio files obtained from subsystem B34, and consolidates or combines these digital audio files into one or more than one digital audio file(s) that contain the same or greater amount of information.") and a final version of the record is obtained (Silverstein ¶0863: "The Piece Deliverer Subsystem B36 transmits the formatted digital audio file(s) from the system to the system user (either human or computer) requesting the information and/or file(s), typically through the system interface subsystem B0"), with the composition and its record then verified by an AI (Silverstein ¶0866: "In general, the Piece Feedback Analyzer considers all available input, including, but not limited to, autonomous or artificially intelligent measures of quality and accuracy.") critic module (Silverstein ¶0867: "As shown in FIG. 27QQ1, the Feedback Subsystem B41 performs Autonomous Confirmation Analysis. Autonomous Confirmation Analysis is a quality assurance/self-checking process, whereby the system examines the piece of music that was created, compares it against the original system inputs, and confirms that all attributes of the piece that was requested have been successfully created and delivered and that the resultant piece is unique.").
Regarding claim 4, Silverstein discloses a method of generating music contents comprising the features of claim 1 as discussed above. Silverstein further discloses that the developed models are sent to be read (Silverstein ¶0620: "the Parameter Transformation Engine Subsystem B51 then generates an appropriate set of probability-based parameter programming tables for subsequent distribution and loading into the various subsystems across the system, for use in the automated music composition and generation process being prepared for execution.") and the digital score of the composition with the set parameters is generated (Silverstein ¶0816: "From the composed piece of music, typically represented with a lead sheet (or similar) representation as shown by the musical score representation at the bottom of FIG. 27JJ1, and also at the top of FIG. 27KK6, the Orchestration Generation Subsystem B31 determines what music (i.e. set of notes or pitches) will be played by the selected instruments, derived from the piece of music that has been composed thus far automatically by the automated music composition process.") automatically (Silverstein ¶0058: "(iii) COMPOSE mode, where the system automatically composes music based on the music it receives and analyzes from the musical instruments in its (local or remote) environment during the musical session, and (iv) PERFORM mode, where the system autonomously performs automatically composed music, in real-time, in response to the musical information received and analyzed from its environment during the musical session.").

Regarding claim 5, Silverstein discloses a method of generating music contents comprising the features of claim 1 as discussed above. Silverstein further discloses that sound tracks of instruments are rendered using resources from a repository (¶0853: "The Automatic Music Composition And Generation (i.e. Production) System of the present invention described herein utilizes libraries of digitally-synthesized (i.e.
virtual) musical instruments, or virtual-instruments, to produce digital audio samples of individual notes specified in the musical score representation for each piece of composed music.").

Regarding claim 6, Silverstein discloses a method of generating music contents comprising the features of claim 1 as discussed above. Silverstein further discloses that the composition and its record are verified using artificial intelligence algorithms (Silverstein ¶0866: "In general, the Piece Feedback Analyzer considers all available input, including, but not limited to, autonomous or artificially intelligent measures of quality and accuracy.") and the process of generating music contents is repeated from its beginning if the record does not pass verification (Silverstein ¶0869: "As indicated in FIGS. 27QQ1, 27QQ2 and 27QQ3, if musical piece uniqueness is not successfully confirmed, then the feedback subsystem B42 modifies the inputted musical experience descriptors and/or subsystem music-theoretic parameters, and then restarts the automated music composition and generation process to recreate the piece of music.").

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2 and 3 are rejected under 35 U.S.C. 103 as unpatentable over Silverstein in view of Kolen et al. (US 20200306641 A1, filed 03/26/2019), hereinafter Kolen, to the extent understood.

Regarding claim 2, Silverstein discloses a method of generating music contents comprising the features of claim 1 as discussed above. Silverstein does not explicitly disclose that the final music record is created using artificial intelligence algorithms at the stage of analysis for the presence of existing patterns, composition generation models are developed and the generator preparing sound is created. However, Kolen suggests that the final music record is created using artificial intelligence algorithms at the stage of analysis for the presence of existing patterns (Kolen ¶0012: "To generate the music, the system may access musical cues for the particular electronic game… These musical cues may be created by a game composer or by a machine learning model.
The system may generate music based on musical cues. For example, if a musical cue indicates a particular melody, then the system may generate music based on the melody and a genre preference for a user. As another example, if a musical cue indicates a particular emotion (e.g., happiness), then the system may generate music based on the emotion and genre preference. Therefore, the generated music may be based on specific musical cues, but adjusted for each user based on the user's empirically determined genre preferences."), composition generation models are developed (Kolen ¶0071: "For example, during training the neural network 222 may learn a meaning associated with input styles. As described above, style or genre may be encoded. In some embodiments, the neural network 222 may learn a style embedding (e.g., as a contextual input, which may be connected with each layer).") and the generator preparing sound is created (Kolen ¶0072: "As illustrated in FIG. 2, the audio waveform 144B may be utilized as an input to a different artificial neural network 224. For example, a convolutional autoencoder may be utilized... The convolutional autoencoder may be trained utilizing multiple autoencoder pathways, for example one per style or genre. Subsequent to training, the audio waveform 144B may be analyzed by the network 224 in view of the style preference 212. The network 224 may then generate personalized music 102 in accordance with the style preference.").

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the method of generating music contents of Silverstein by adding the artificial intelligence algorithms of Kolen to generate personalized music in accordance with the style preference (Kolen ¶0072).

Regarding claim 3, Silverstein discloses a method of generating music contents comprising the features of claim 1 as discussed above.
Silverstein further teaches that sound samples are created and contents are saved in the repository (Silverstein ¶0855: "The Digital Audio Sampling Synthesis Method involves recording a sound source (such as a real instrument or other audio event) and organizing these samples in an intelligent manner for use in the system of the present invention. In particular, each audio sample contains a single note, or a chord, or a predefined set of notes. Each note, chord and/or predefined set of notes is recorded at a wide range of different volumes, different velocities, different articulations, and different effects, etc. so that a natural recording of every possible use case is captured and available in the sampled instrument library.").

Silverstein does not explicitly disclose that the operations are conducted in parallel. However, Kolen suggests that the operations may be conducted in parallel (Kolen ¶0103: "Moreover, in certain embodiments, acts or events can be performed concurrently, for example, through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. In addition, different tasks or processes can be performed by different machines and/or computing systems that can function together.").

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the method of generating music contents of Silverstein by adding the parallel processing of Kolen to perform processing operations in a high-speed manner (Silverstein ¶0653).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHILIP SCOLES whose telephone number is (703)756-1831. The examiner can normally be reached Monday-Friday 8:30-4:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Dedei Hammond, can be reached on 571-270-7938. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHILIP G SCOLES/
Examiner, Art Unit 2837

/JEFFREY DONELS/
Primary Examiner, Art Unit 2837

Prosecution Timeline

Dec 21, 2022
Application Filed
Jan 24, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603073
ELECTRONIC PERCUSSION INSTRUMENT, CONTROL DEVICE FOR ELECTRONIC PERCUSSION INSTRUMENT, AND CONTROL METHOD THEREFOR
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597405
AUTO-RECORDING FOR MUSICAL INSTRUMENT
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597406
ELECTRONIC CYMBAL AND STRIKING DETECTION METHOD
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586552
MULTI-LEVEL AUDIO SEGMENTATION USING DEEP EMBEDDINGS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579962
DEVICE AND ELECTRONIC MUSICAL INSTRUMENT
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 56%
With Interview: 77% (+21.3%)
Median Time to Grant: 3y 10m
PTA Risk: Low
Based on 54 resolved cases by this examiner. Grant probability derived from career allow rate.
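The "with interview" projection is consistent with simply adding the examiner's interview lift to the base grant probability. A sketch of that arithmetic (the additive model is an assumption about how the tool combines the two figures, not a documented formula):

```python
# Assumed additive model: base grant probability plus interview lift.
# Both inputs are the figures displayed on this page.
base_pct = 56.0   # career allow rate, %
lift_pct = 21.3   # interview lift, percentage points

print(f"With interview: {base_pct + lift_pct:.0f}%")  # 77%
```

Note the lift is expressed in percentage points, not a relative multiplier; a multiplicative reading (56% × 1.213 ≈ 68%) would not reproduce the displayed 77%.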
