Prosecution Insights
Last updated: April 17, 2026
Application No. 17/636,458

SYSTEM AND METHOD FOR DISTRIBUTED MUSICIAN SYNCHRONIZED PERFORMANCES

Final Rejection §103

Filed: Feb 18, 2022
Examiner: SCOLES, PHILIP GRANT
Art Unit: 2837
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: unknown
OA Round: 2 (Final)
Grant Probability: 56% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 10m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 56% (grants 56% of resolved cases; 30 granted / 54 resolved; -12.4% vs TC avg)
Interview Lift: +21.3% for resolved cases with an interview
Avg Prosecution: 3y 10m typical timeline; 36 applications currently pending
Total Applications: 90 across all art units (career history)

Statute-Specific Performance

§101: 1.6% (-38.4% vs TC avg)
§102: 22.0% (-18.0% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§112: 20.2% (-19.8% vs TC avg)

Tech Center averages are estimates; based on career data from 54 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Claims 1-13, 15-17, and 19-22 are pending and have been considered below. Claims 14 and 18 are cancelled by applicant.

Applicant’s arguments, see page 8, lines 1-11, filed 5/16/2025, with respect to claims 4, 6, and 20, have been fully considered and are persuasive. In response to Applicant’s amendments, the objections and 112(b) rejection have been withdrawn.

Applicant’s arguments, see page 8, line 12 – page 12, line 2, filed 5/16/2025, with respect to claims 1-13, 15-17, and 19-22, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 6, 10, and 19-22 are rejected under 35 U.S.C. 103 as unpatentable over Steinwedel et al. (WIPO Patent Application No. PCT/US2019/040113, filed July 1, 2019, published as WIPO Patent Publication No. 2020006556 A1, January 2, 2020), hereinafter Steinwedel, in view of Redmann (U.S. Patent Publication No. 20070039449 A1, February 22, 2007), hereinafter Redmann.

Regarding claim 1, Steinwedel teaches a computerized method (Steinwedel p. 25, lines 22-25: "FIG. 17 illustrates respective instances (501 and 520) of a portable computing device such as mobile device 400 programmed with vocal audio and video capture code, user interface code, pitch correction code, an audio rendering pipeline and playback code in accord with the functional descriptions herein.") that enables an interactive session between a plurality of geographically distributed musicians (Steinwedel p. 21, lines 32-33: "FIGs. 14 and 15 illustrate exemplary techniques for capture, coordination and/or mixing of audiovisual content for geographically distributed performers"), comprising: specifying song arrangements for the interactive session as a sequence of song parts (Steinwedel p. 20, lines 9-11: "the media segment capture and edit platform may be extended to allow a user (user A, B, C or still another user D) to designate things like: song part (“Chorus”, “Verse”, etc.)"; Steinwedel p.
21, lines 21-27: "Portions of a performance timeline (often portions that correspond to musical sections) may be marked and labelled… by a user… or labelled by a machine learning robot trained to identify section and boundaries") to be played or sung by each of the participating plurality of geographically distributed musicians (Steinwedel p. 20, lines 25-29: "Pre-marked portions (here musical sections) of an audio or audiovisual work 1301 may be selected by the seeding user. The resulting short seed 1311 constitutes the seed for multiple collaborations (here collabs #1 and #2). Whatever its extent or scope, the seed or seed portion delimits the collaboration request (or call) for others to join."); automatically detecting each musician performance of each of the participating plurality of geographically distributed musicians on at least one instrument track to define a detected musician performance (Steinwedel p. 4, lines 6-7: "vocal audio can be pitch-corrected in real-time at the mobile device." Real-time pitch correction at the mobile device necessarily comprises automatic detection of a musician's performance at each mobile (geographically distributed) device, and the human voice falls within the definition of a musical "instrument."); automatically detecting musician audio and video for the detected musician performance on any song part that is automatically captured (Steinwedel p. 2, lines 5-7: "the vocal performances of individual users are captured (together with performance synchronized video) on mobile device." The audiovisual stream can be automatically detected as described above) with reference to the timing for that part to define a captured musician performance (Steinwedel p. 
3, lines 24-26: "collaboration features may be provided to allow users to contribute media content and/or other temporally synchronized information to an evolving performance timeline"); transmitting the captured musician performances to at least one of the plurality of geographically distributed musicians participating in a same session (Steinwedel p. 3, lines 26-28: "To facilitate collaboration and/or accretion of content, a shared service platform may expose media content and performance timeline data as a multi-user concurrent access database"); and wherein all received performances from other musicians of the plurality of geographically distributed musicians are played in accordance with the current specified arrangement of song parts to produce the effect of playing with other musicians live in the interactive session (Steinwedel p. 2, lines 21-26: "A seeding user's call invites other users to join the full-length or short form seed by singing along, singing a particular vocal part or musical section, singing harmony or other duet part, rapping, talking, clapping, recording video, adding a video clip from camera roll, etc. The resulting group performance, whether full-length or just a chunk, may be posted, livestreamed") for lock-free peer-to-peer data synchronization (Steinwedel p. 13, lines 13-17: "A content selection and performance accretion module 112 of content server 110 performs audio mixing and video stitching in the illustrated design, while audiovisual render / stream control module 113 supplies group audiovisual performance mix 111 to a downstream audience. In other embodiments, peer-to-peer communications may be employed for at least some of the illustrated flows."). 
Steinwedel does not explicitly disclose a participating plurality of geographically distributed musicians in a same session yet subject to transmission latency, wherein all received performances from other musicians of the plurality of geographically distributed musicians are played interactively for one of the plurality of geographically distributed musicians without the transmission latency.

However, Redmann teaches a participating plurality of geographically distributed musicians in a same session (Redmann abstract: "An improved method and apparatus are disclosed to permit real time, distributed performance by multiple musicians at remote locations, and for recording that collaboration.") yet subject to transmission latency (Redmann abstract: "The latency of the communication channel is transferred to the behavior of the local instrument so that a natural accommodation is made by the musician."), wherein all received performances from other musicians of the plurality of geographically distributed musicians are played interactively for one of the plurality of geographically distributed musicians without the transmission latency (Redmann abstract: "This allows musical events that actually occur simultaneously at remote locations to be played together at each location, though not necessarily simultaneously at all locations.").

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the computerized method of Steinwedel by adding the interactivity for one of the plurality of geographically distributed musicians without transmission latency to permit locations having low latency connections to retain some of their advantage (Redmann abstract).

Regarding claim 2, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.
Steinwedel further teaches that the musician performances are continuously updated to the latest available recordings for each instrument track and song part (Steinwedel p. 3, lines 26-28: "To facilitate collaboration and/or accretion of content, a shared service platform may expose media content and performance timeline data as a multi-user concurrent access database." Accretion of content to a multi-user concurrent access database comprises continuously updating to the latest available recordings for each instrument track and song part).

Regarding claim 3, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.

Steinwedel further discloses that the musician performances are automatically updated to the latest available recordings for each instrument track and song part (Steinwedel p. 3, lines 26-28: "To facilitate collaboration and/or accretion of content, a shared service platform may expose media content and performance timeline data as a multi-user concurrent access database." Accretion of content to a multi-user concurrent access database comprises automatically updating to the latest available recordings for each instrument track and song part).

Regarding claim 4, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.

Steinwedel further teaches that the specified arrangement of the song includes at least one of: playing a song from beginning to end (Steinwedel p. 2, lines 14-15: "a seed may be a full-length seed spanning much or all of a pre-existing audio (or audiovisual) work"), playing a subsection of a song repeatedly, playing a dynamically modified arrangement, or playing along to a system generated arrangement.
Regarding claim 6, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.

Steinwedel further teaches that the interactive session includes at least one instance where a plurality of geographically isolated musicians are: connected peer-to-peer (Steinwedel p. 13, lines 16-17: "peer-to-peer communications may be employed for at least some of the illustrated flows"), upload/download where the musicians are individually connected to a central server, or offline where the musicians are not connected.

Regarding claim 10, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.

Steinwedel further teaches that the instrument track is a user selected group of audio and video inputs used to record a specific musical instrument for one or more song parts within a musical composition (Steinwedel p. 2, lines 14-18: "a seed may be a full-length seed spanning much or all of a pre-existing audio (or audiovisual) work and mixing, to seed further the contributions of one or more joiners, a user’s captured media content for at least some portions of the audio (or audiovisual) work. In some cases, a short seed may be employed spanning less than all (and in some cases, much less than all) of the audio (or audiovisual) work." Audio content may comprise a specific musical instrument).

Regarding claim 19, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.
Steinwedel further teaches automatically transforming a song part recording to a new configuration while maintaining with the new configuration aligning with any other song part configuration in a lock free manner (Steinwedel p. 3, lines 26-28: "To facilitate collaboration and/or accretion of content, a shared service platform may expose media content and performance timeline data as a multi-user concurrent access database." Accretion of content comprises automatically transforming, and multi-user concurrent access comprises a lock-free configuration.).

Regarding claim 20, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.

Steinwedel further teaches that the song parts comprise at least one base song part (Steinwedel p. 20, lines 19-20: "A seed may be a full-length seed spanning much or all of a pre-existing audio (or audiovisual) work") and at least one derived song part (Steinwedel p. 20, lines 22-23: "a short seed may be employed that spans less than all (and in some cases, much less than all) of the audio (or audiovisual) work." The definition of a "derived song part" according to ¶0033 of the instant specification can comprise changing body measure length).

Regarding claim 21, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.

Steinwedel further teaches a computer-implemented system (Steinwedel p. 25, lines 22-25: "FIG.
17 illustrates respective instances (501 and 520) of a portable computing device such as mobile device 400 programmed with vocal audio and video capture code, user interface code, pitch correction code, an audio rendering pipeline and playback code in accord with the functional descriptions herein."), comprising: one or more processors (Steinwedel p. 1, line 15: "include powerful media processors"); and one or more non-transitory computer-readable storage mediums (Steinwedel p. 26, line 22: "non-transitory storage") containing instructions configured to cause the one or more processors to perform operations (Steinwedel p. 26, lines 27-28: "suitable for storing electronic instructions, operation sequences, functionally descriptive information encodings, etc.").

Regarding claim 22, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.

Steinwedel further teaches a computer program product stored on a non-transitory computer-readable storage medium (Steinwedel p. 26, lines 14-15: "a computer program product encoded in a machine-readable medium as instruction sequences and other functional constructs of software") comprising: computer-executable instructions causing a processor to perform operations (Steinwedel p. 26, lines 15-16: "instruction sequences and other functional constructs of software, which may in turn be executed in a computational system").

Claims 5, 13, 15, and 16 are rejected under 35 U.S.C. 103 as unpatentable over Steinwedel in view of Redmann and further in view of Helms et al. (United States Patent Publication No. 20130238999 A1, September 12, 2013), hereinafter Helms.
Regarding claim 5, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.

Steinwedel does not explicitly disclose that playback of a performance for the instrument track and the song part will not begin until the entire recording has been received as received performances and the interactive session song arrangement is positioned at that song part.

However, Helms teaches that playback of a performance for the instrument track and the song part will not begin until the entire recording has been received as received performances (¶0024: "Once the band members of the jam session are satisfied with the result, the band leader (i.e., host) can either manually or automatically collect the recordings of each band member via the communicative coupling means (e.g., wireless coupling) and archive a complete recording of the session for subsequent playback, editing, or further jam sessions (i.e., 'sessioning')." Subsequent playback will not begin until after the band leader collects the recordings of each band member and archives a complete recording of the session) and the interactive session song arrangement is positioned at that song part (¶0024: "subsequent playback" necessarily requires positioning at that song part).

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the interactive session of Steinwedel (in view of Redmann) by adding the receiving of performances and subsequent playback of Helms to verify song uniformity (Helms ¶0024).

Regarding claim 13, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.
Steinwedel further teaches that each of a plurality of music project data elements contributing to the interactive session is assigned an owner identifier (Steinwedel p. 19, lines 29-31: "Next (in the illustrated flow), a user (user A, B or another user C) may assign (STEP 3) particular lyric portions to singers (e.g., part A vs. part B in duet)." "User X" assigned to a lyric portion comprises an owner identifier).

Steinwedel (in view of Redmann) does not explicitly disclose a project identifier and a generated standard universally unique identifier (UUID) as an entity identifier.

However, Helms teaches a project identifier (Helms ¶0079: "It should be noted that when a Jam Session is created on a host device, the Jam Session can be assigned a Universally Unique Identifier (UUID)") and a generated standard universally unique identifier (UUID) as an entity identifier (Helms ¶0079: "It should be noted that when a Jam Session is created on a host device, the Jam Session can be assigned a Universally Unique Identifier (UUID)").

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the interactive session of Steinwedel (in view of Redmann) by adding the UUID of Helms to trigger alerts if a user tries to change song architecture parameters while offline (Helms ¶0079).

Regarding claim 15, Steinwedel (in view of Redmann and further in view of Helms) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 13 as discussed above.

Steinwedel further teaches that the interrelationships between the music project data elements provide lock-free peer-to-peer data synchronization (Steinwedel p.
13, lines 13-17: "A content selection and performance accretion module 112 of content server 110 performs audio mixing and video stitching in the illustrated design, while audiovisual render / stream control module 113 supplies group audiovisual performance mix 111 to a downstream audience. In other embodiments, peer-to-peer communications may be employed for at least some of the illustrated flows.").

Regarding claim 16, Steinwedel (in view of Redmann and further in view of Helms) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 13 as discussed above.

Steinwedel further teaches that the interrelationships between the music project data elements provide lock-free data synchronization between one of a client and a central server (Steinwedel p. 3, lines 26-28: "To facilitate collaboration and/or accretion of content, a shared service platform may expose media content and performance timeline data as a multi-user concurrent access database." A shared service platform with a concurrent access database comprises a central server.).

Claims 7, 11, 12, and 17 are rejected under 35 U.S.C. 103 as unpatentable over Steinwedel in view of Redmann and further in view of Taub et al. (United States Patent Publication No. 20100212478 A1, August 26, 2010), hereinafter Taub.

Regarding claim 7, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.

Steinwedel (in view of Redmann) does not explicitly disclose that the song part comprises musical tempo, beats per measure, and number of measures.
However, Taub teaches that a song part includes musical tempo (Taub ¶0094: "the score processing unit 550 includes a tempo detection unit 552"), beats per measure (Taub ¶0094-0095: "Other embodiments of the tempo detection unit 552 further use the determined tempo to assign note values (e.g., quarter note, eighth note, etc.) to notes and rests. Meter dictates how many beats are in each measure of music, and which note value is considered a single beat."), and number of measures (Taub ¶0220: "The LCD screen 1102 allows the user to determine exactly which measures are captured." A display of exactly which measures are captured necessarily comprises a number of measures).

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the computerized method of Steinwedel by adding the tempo and rhythm analysis of Taub to generate useful information for use as music elements (Taub ¶0113).

Regarding claim 11, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.

Steinwedel does not explicitly disclose an adaptive performance detection program that analyzes the instrument track to determine if the instrument track contains a user performance.

However, Taub teaches an adaptive performance detection program that analyzes the instrument track to determine if the instrument track contains a user performance (Taub ¶0078: "the audio receiver 506 includes a threshold detection component, configured to begin receiving the music input signal 102 (e.g., start recording) on detection of audio levels exceeding certain thresholds").
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the interactive session of Steinwedel (in view of Redmann) by adding the track analysis of Taub to avoid including unwanted noise in the track (Taub ¶0234).

Regarding claim 12, Steinwedel (in view of Redmann and further in view of Taub) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 11 as discussed above.

Taub further teaches that the adaptive performance detection program analyzes audio input by extracting audio signal features from an audio input signal (Taub ¶0078: "detection of audio levels exceeding certain thresholds") and then calculates a score indicating probability that the audio input signal contains a performance (Taub ¶0078: "the audio receiver 506 includes a threshold detection component, configured to begin receiving the music input signal 102 (e.g., start recording) on detection of audio levels exceeding certain thresholds." Exceeding or falling below the threshold comprises a binary score indicating probability that the audio input signal contains a performance).

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the interactive session of Steinwedel by adding the track analysis of Taub to avoid including unwanted noise in the track (Taub ¶0234).

Regarding claim 17, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.

Steinwedel does not explicitly disclose providing an automated looped recording session after the same session.
However, Taub teaches providing an automated looped recording session after the same session (Taub ¶0166: "Additional options may allow participants to, for example, loop sections of the collaboration project").

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the interactive session of Steinwedel (in view of Redmann) by adding the track looping of Taub to keep trying new ideas over the same passages or to practice a section of a song (Taub ¶0166).

Claims 8 and 9 are rejected under 35 U.S.C. 103 as unpatentable over Steinwedel in view of Redmann and further in view of Reynolds et al. (United States Patent No. 10,056,062 B2, August 21, 2018), hereinafter Reynolds.

Regarding claim 8, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.

Steinwedel (in view of Redmann) does not explicitly disclose that the song part further comprises at least one lead-in measure or at least one tail measure.

However, Reynolds teaches a song part further comprising at least one lead-in measure or at least one tail measure (Reynolds col 4, lines 41-43: "two measures displayed at a time (see, e.g., FIG. 5) along with an optional leading measure and trailing measure.").

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the interactive session of Steinwedel (in view of Redmann) by adding the lead-in and tail measures of Reynolds to facilitate synchronicity in a user's performance alongside a musical track (Reynolds col 4, lines 31-33).
Regarding claim 9, Steinwedel (in view of Redmann) teaches a computerized method that enables an interactive session between a plurality of geographically distributed musicians comprising the features of claim 1 as discussed above.

Steinwedel further teaches mixing song parts (Steinwedel p. 4, lines 26-27: "manipulating and mixing the uploaded audiovisual content of multiple contributing vocalists").

Steinwedel does not explicitly disclose that transitions between song parts within the current specified arrangement are played by mixing song part lead-in measures with preceding song part body measures, and mixing song part tail measures with following song part body measures.

However, Reynolds teaches that song parts within the current specified arrangement are played with lead-in measures and song part tail measures (Reynolds col 4, lines 41-43: "two measures displayed at a time (see, e.g., FIG. 5) along with an optional leading measure and trailing measure.").

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the mixing of song parts of Steinwedel (in view of Redmann) by adding the lead-in and tail measures of Reynolds to facilitate synchronicity in a user's performance alongside a musical track (Reynolds col 4, lines 31-33).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHILIP SCOLES whose telephone number is (703)756-1831. The examiner can normally be reached Monday-Friday 8:30-4:30 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dedei Hammond can be reached on 571-270-7938. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHILIP G SCOLES/
Examiner, Art Unit 2837

/JEFFREY DONELS/
Primary Examiner, Art Unit 2837
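The Redmann latency accommodation cited in the rejection transfers the channel delay to the local instrument, so that notes struck simultaneously at remote sites are heard together at each site. A minimal sketch of that timing rule; the function names and the 40 ms figure are illustrative assumptions, not taken from the cited references:

```python
# Redmann-style latency accommodation (illustrative sketch).
# The one-way channel latency is applied to the *local* instrument as well,
# so a locally played note and a simultaneously played remote note are
# rendered at the same instant at this location.

def local_render_time(event_time_ms: float, channel_latency_ms: float) -> float:
    """When a locally played event is heard, after the transferred latency."""
    return event_time_ms + channel_latency_ms

def remote_arrival_time(event_time_ms: float, channel_latency_ms: float) -> float:
    """When a simultaneously played remote event arrives over the channel."""
    return event_time_ms + channel_latency_ms

# Two musicians strike a note at t = 1000 ms over a 40 ms link: both events
# are rendered at this location at 1040 ms, i.e., together.
print(local_render_time(1000, 40) == remote_arrival_time(1000, 40))
```

This also matches the abstract's caveat: events play together at each location, but not necessarily at the same wall-clock instant across locations, since each site renders at its own latency offset.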
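The threshold-based performance detection the Office Action attributes to Taub (¶0078) reduces to starting capture once the input level crosses a floor. A hedged sketch, assuming RMS level measured in dBFS and an illustrative -40 dBFS threshold (the helper names and the threshold value are assumptions, not from Taub):

```python
import math

def rms_dbfs(samples: list[float]) -> float:
    """RMS level of a block of samples in [-1.0, 1.0], expressed in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-12))  # floor avoids log10(0) on silence

def contains_performance(samples: list[float], threshold_dbfs: float = -40.0) -> bool:
    """Treat the block as containing a performance once it exceeds the floor."""
    return rms_dbfs(samples) > threshold_dbfs

print(contains_performance([0.5, -0.5, 0.5, -0.5]))          # loud tone
print(contains_performance([0.001, -0.001, 0.001, -0.001]))  # near-silence
```

As the examiner notes for claim 12, the "score" this yields is effectively binary: the block either clears the threshold or it does not.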

Prosecution Timeline

Feb 18, 2022 — Application Filed
Jan 10, 2025 — Non-Final Rejection — §103
May 05, 2025 — Applicant Interview (Telephonic)
May 05, 2025 — Examiner Interview Summary
May 06, 2025 — Applicant Interview (Telephonic)
May 16, 2025 — Response Filed
Jul 01, 2025 — Examiner Interview Summary
Oct 09, 2025 — Final Rejection — §103 (current)

Precedent Cases

Applications granted by this examiner in similar technology

Patent 12603073 — ELECTRONIC PERCUSSION INSTRUMENT, CONTROL DEVICE FOR ELECTRONIC PERCUSSION INSTRUMENT, AND CONTROL METHOD THEREFOR
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597405 — AUTO-RECORDING FOR MUSICAL INSTRUMENT
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12597406 — ELECTRONIC CYMBAL AND STRIKING DETECTION METHOD
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586552 — MULTI-LEVEL AUDIO SEGMENTATION USING DEEP EMBEDDINGS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579962 — DEVICE AND ELECTRONIC MUSICAL INSTRUMENT
Granted Mar 17, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 56%
With Interview: 77% (+21.3%)
Median Time to Grant: 3y 10m
PTA Risk: Moderate

Based on 54 resolved cases by this examiner. Grant probability derived from career allow rate.
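The projection figures can be reproduced with back-of-envelope arithmetic. A sketch assuming grant probability is simply the career allow rate (30 granted / 54 resolved) and the with-interview figure is that rate plus the reported +21.3% lift; the tool's actual model is not disclosed, so this is only a consistency check:

```python
granted, resolved = 30, 54          # from the examiner's career history above
allow_rate = granted / resolved     # career allow rate
interview_lift = 0.213              # reported interview lift

print(f"Grant probability: {allow_rate:.0%}")                   # 56%
print(f"With interview:    {allow_rate + interview_lift:.0%}")  # 77%
```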
