Prosecution Insights
Last updated: April 19, 2026
Application No. 17/984,117

AUDIO PROCESSING

Final Rejection §102
Filed: Nov 09, 2022
Examiner: KRZYSTAN, ALEXANDER J
Art Unit: 2694
Tech Center: 2600 — Communications
Assignee: Blackmagic Design Pty Ltd.
OA Round: 4 (Final)
Grant Probability: 81% (Favorable)
OA Rounds: 5-6
To Grant: 3y 1m
With Interview: 88%

Examiner Intelligence

Grants 81% — above average
Career Allow Rate: 81% (913 granted / 1121 resolved; +19.4% vs TC avg)
Interview Lift: +6.9% (moderate, roughly +7%; based on resolved cases with interview)
Avg Prosecution: 3y 1m (38 applications currently pending)
Total Applications: 1159 (across all art units)

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 37.1% (-2.9% vs TC avg)
§102: 24.3% (-15.7% vs TC avg)
§112: 21.0% (-19.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 1121 resolved cases.
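Assuming each "vs TC avg" delta is simply the examiner's statute-specific rate minus the estimated Tech Center average, the implied averages can be recovered directly (a minimal sketch; the dictionary layout is illustrative, not part of the dashboard's data model):

```python
# Recover implied Tech Center averages, assuming
# delta = examiner_rate - tc_average (in percentage points).
stats = {
    "§101": (2.7, -37.3),
    "§103": (37.1, -2.9),
    "§102": (24.3, -15.7),
    "§112": (21.0, -19.0),
}
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)
```

Under this assumption every implied average works out to 40.0, which suggests the deltas were computed against a single baseline estimate rather than per-statute averages.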

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Objections

As per claim 19, "entity entity" should be "entity".

Claim Rejections - 35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-25 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Rockel et al. (US 20220407984 A1).

As per claim 1, Rockel discloses a method of performing a plurality of processing operations on a plurality of audio entities using a computer system having multiple data processing units, each digital audio entity/track/object/channel/data comprising a plurality of samples (para. 61: production devices that generate or provide production signals include cameras and audio mixers, where the synchronization elements are implemented on the production device in the system as shown in fig. 1 in the context of an audio mixer, and where the sync information defines the processing of groups of samples of audio tracks, as that is what an audio mixer comprises, since the audio mixer can be used to provide: "Terms such as input, output, and production signal, unless otherwise indicated, are also used herein in a general sense, to denote feeds, sources, video production outputs (e.g., live/television/broadcast video) and/or other forms of inputs or outputs that may be used or generated in video production"; per para. 63, those cited signals comprise digital signals, and as such the audio mixer must process and mix audio tracks digitally), the method including:

allocating (synchronizing, per the synchronization element, to perform a task at a target execution time per para. 21) each data processing operation on a digital audio entity (the tasks per paras. 25 and 59, and/or events per para. 69, in the context of an audio mixer are functions performed on audio tracks by the audio mixer) to one of said multiple data processing units of said computer system (the logical part of the digital processing that performs an action of a task or event in the audio mixer), such that said data processing operation is performed (para. 69: "A task or event refers to an allocated action that is to be executed or performed by a production device") on said allocated one of said data processing units;

wherein said allocation is based at least partly on an expected duration of execution (the target execution time cited above is relative to a clocking signal derived from a shared clocking source coupled to each respective processing stage for the purpose of performing the cited digital processing, noting that each clocking signal comprises a known width which defines an expected maximum duration of execution of a particular function for that particular clock cycle; said predefined clock cycles, or combinations of clock cycles, are akin to the safety margin cited in applicant's specification at 110; additionally, each clocking signal comprises a known jitter amount which defines a respective additional expected duration of execution for a given clock-cycle-based task) for the data processing operation on said one of said data processing units to which it is allocated;

performing said plurality of data processing operations on said plurality of digital audio entities according to said allocation (the tasks are performed on the audio/video data based on the index processing of para. 21 and the execution time); and

outputting processed audio (para. 61: "Examples of production devices that generate or provide production signals include cameras, audio mixers"; audio mixers by definition output audio, and will output the audio/perform a task based on the index processing per para. 21).
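For orientation only, the claim 1 limitation of allocating each operation to a processing unit "based at least partly on an expected duration of execution" describes a classic load-balancing problem. A minimal hypothetical sketch of one such scheme (the operation names and the greedy least-loaded heuristic are illustrative; they are not taken from Rockel or from the application):

```python
# Hypothetical sketch: allocate each operation to the data processing
# unit with the least accumulated expected execution time.
def allocate(operations, num_units):
    """operations: list of (name, expected_duration) tuples."""
    load = [0.0] * num_units          # accumulated expected time per unit
    allocation = {}
    # Longest-first ordering tends to balance load better.
    for name, duration in sorted(operations, key=lambda op: -op[1]):
        unit = load.index(min(load))  # pick the least-loaded unit
        allocation[name] = unit
        load[unit] += duration
    return allocation, load

ops = [("reverb", 5.0), ("eq", 3.0), ("limiter", 2.0), ("gain", 1.0)]
alloc, load = allocate(ops, 2)
print(alloc, load)
```

This is only one conventional reading of "allocation based on expected duration"; the claim itself does not recite any particular heuristic.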
As per claim 11, a method of performing a plurality of data processing operations on a plurality of digital audio entities comprising a plurality of samples using a computer system having multiple data processing units, the method including:

determining if each data processing operation on a digital audio entity is a realtime data processing operation that must be performed in a predetermined time period, or a non-realtime data processing operation (para. 41: the tasks that are to be performed at a predefined execution time, which comprises a period of time as read by a digital processor; those tasks with an immediate/the most immediate execution time are realtime since they must occur immediately, while those tasks with a later execution time are non-realtime and are performed after the realtime/immediate task);

allocating each realtime data processing operation (the tasks per paras. 25 and 59, and/or events per para. 69) to one of said data processing units, such that said realtime data processing operation is performed on said one of said data processing units within the predetermined time period (synchronizing, per the synchronization element, to perform a task at a target execution time per para. 21), wherein said allocation is based at least partly on an expected duration of execution (the target execution time cited above is relative to a clocking signal derived from a shared clocking source coupled to each respective processing stage for the purpose of performing the cited digital processing, noting that each clocking signal comprises a known width which defines an expected duration of execution of a particular function for that particular clock cycle; said predefined clock cycles, or combinations of clock cycles, are akin to the safety margin cited in applicant's specification at 110; additionally, each clocking signal comprises a known jitter amount which defines a respective additional expected duration of execution for a given clock-cycle-based task) for the realtime data processing operation on said one of said data processing units to which it is allocated (per the claim 1 rejection);

allocating each non-realtime data processing operation to one of said data processing units, such that each said non-realtime data processing operation is performed on said one of said data processing units (the more immediate execution times assigned to tasks are realtime operations allocated to data processing units, while the tasks with non-immediate execution times are allocated to respective data units based on the later execution times), wherein said allocation is based at least partly on an expected execution time for the non-realtime data processing operation on said one of said data processing units to which it is allocated (the allocation is based on the execution time and expected duration of execution, per the clocking-signal mapping above);

performing said plurality of processing operations on said plurality of digital audio entities according to said allocation (the tasks are performed on the audio/video data based on the index processing of para. 21, the execution time, and the audio mixer per para. 61); and

outputting processed audio (per the claim 1 rejection).

As per claim 2, allocating each data processing operation to one of said data processing units includes identifying one or more realtime processing/data processing operations that must be performed in a predetermined time period (para. 41: the tasks that are to be performed at a predefined execution time, which comprises a period of time as read by a digital processor), and allocating said realtime processing operations to be performed before non-realtime processing operations (those tasks with an immediate/the most immediate execution time are realtime since they must occur immediately, while those tasks with a later execution time are non-realtime and are performed after the realtime/immediate task).
As per claim 3, the method of claim 1 wherein allocating each data processing operation to one of said data processing units includes identifying one or more realtime processing/data processing operations that must be performed in a predetermined time period (the particular execution time for a task, as read by a processor, is a time period), and allocating said realtime processing operations such that they are to be performed on separate data processing units to non-realtime data processing operations (each particular task is performed by a respective data unit, where the data unit is the logical part of the processor that performs a particular task).

As per claim 4, the method of claim 1 which further includes: determining a revised allocation of each data processing operation to one of said data processing units (the execution time can be determined, or revised per para. 114, where revised execution times define a revised allocation, and the logical means of executing the task/data processing operation at a particular time is revised).

As per claim 5, the method of claim 4 wherein the method includes: allocating some or each data processing operation to one of said data processing units according to said revised allocation (the logical means of performing each task at a respective particular time is the data processing unit, where the tasks are assigned execution times/allocated to various data processing units as defined by the particular execution time for each task).

As per claim 6, the method of claim 6 wherein either or both of: allocating some or each data processing operation to one of said data processing units according to said revised allocation (the allocation to a data unit defining a particular execution time for each particular task based on the revised execution time); and determining a revised allocation of each data processing operation to one of said data processing units (not mapped); is performed either or both of periodically, or in response to a re-allocation event (the revised execution time).

As per claim 7, the method of claim 6 wherein a re-allocation event is any one of the following events: the plurality of processing/data processing operations to be performed changes; the plurality of audio entities changes; an actual duration of execution (per the clocking-signal mapping in the claim 1 rejection) of one or more processing operations on its allocated processing unit differs from a corresponding execution time and expected duration of execution by a predetermined amount (para. 114: the target execution time may be determined or revised based on the communication delay, where the delay is a difference between actual and corresponding execution times); said plurality of processing operations to be performed on said audio entities are not completed in a predetermined time period using a current allocation; it is determined that said plurality of processing operations to be performed on said audio entities cannot be completed in a predetermined time period using a current allocation; an alternative allocation has been identified that improves overall processing time or efficiency by a predetermined amount; the number and/or permitted utilization of processing units in the computer system has changed.

As per claim 8, the method of claim 1 wherein determining an expected execution time for a data processing operation on said one of said data processing units includes accessing an execution time/duration of execution database containing expected execution time and duration of execution data (per the revised execution time and duration of execution cited above, the current execution time and associated durations must be stored in a database and accessed in order to be revised).

As per claim 9, the method of claim 8 wherein the expected execution time data includes one or more of: standardized execution time and duration of execution data for a plurality of processing/data processing operations; and customized execution time data for a plurality of processing operations that indicate an expected execution time and duration of execution for said processing operations on said computer system (the execution time for each task).
As per claim 10, the method of claim 8 wherein the method further includes: determining an actual execution time and duration of execution for a processing/data processing operation (part of determining the delay cited in the claim 7 rejection); and updating the expected execution time data (the revised execution time and duration of execution of a subsequent clocking cycle and its associated durations).

As per claim 12, the method of claim 11 wherein the method includes allocating said realtime processing/data processing operations before the allocation of non-realtime processing operations (per the claim 2 rejection).

As per claim 13, the method of claim 11 wherein the method includes allocating realtime processing/data processing operations such that they are to be performed on separate data processing units to non-realtime data processing operations (each processing operation is allocated to a distinct data processing unit, noting that the data processing unit is the logical portion of the processor that applies the execution time to a particular processing operation/task).

As per claim 14, the method of claim 11 wherein a non-realtime processing/data processing operation can be performed in a time period twice as long as the predetermined time period (the non-realtime operations are by definition those that are not immediately to be performed; as such they are to be performed in a period at least twice as long, because they are not in the same execution time/period and are at least one time period after the realtime tasks, which is at least twice as long as the period of the immediate/realtime task).

As per claim 15, the method of claim 11 wherein the multiple data processing units include one or more data processing units that are high speed processing units, and one or more data processing units that are low speed processing units, and wherein the method includes preferentially allocating realtime processing operations to said high speed data processing units (the units performing realtime processing operations are high speed because they perform their processing on the allocated task sooner, at a higher action-over-time ratio, while the non-realtime units are low speed because they operate at a lower action-over-time ratio).

As per claim 16, the method of claim 11 wherein at least the expected execution time and duration of execution for at least one realtime data processing operation is stored in an execution time database (the revised execution time must be stored in a database in order to be revised), and the method includes: determining an actual execution time and duration of execution for at least one realtime data processing operation; and updating said execution time database (the revised execution time requires updating the memory/database with a new execution time associated with a processing operation).

As per claim 17, the method of claim 16 which further includes determining a revised allocation of at least each realtime data processing operation to one of said data processing units using the updated execution time database (the implementation of updated execution times is a revised allocation as part of the synchronization process, where the memory storing the execution times is the database).
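Claims 16 and 17 recite a measure/update cycle: determine an actual execution time, update the stored expectation in the execution time database, then re-allocate from the updated values. One conventional way such a cycle is implemented is an exponential moving average; the sketch below is purely illustrative (the helper name, smoothing factor, and operation name are hypothetical, not from the record):

```python
# Hypothetical sketch of the claims 16-17 cycle: blend a measured
# duration into the stored expectation with an exponential moving average.
def update_expectation(db, op, measured, alpha=0.2):
    """db maps operation name -> expected duration; returns the new value."""
    previous = db.get(op, measured)       # first measurement seeds the entry
    db[op] = (1 - alpha) * previous + alpha * measured
    return db[op]

db = {"reverb": 5.0}                      # stored expectation
update_expectation(db, "reverb", 7.0)     # measured run took longer
print(db["reverb"])                       # 0.8*5.0 + 0.2*7.0 = 5.4
```

A re-allocation step (claim 17) would then simply re-run the allocator against the updated `db` values.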
As per claim 18, the method of claim 17 wherein the method includes: allocating some or each realtime data processing operation to one of said data processing units according to said revised allocation (via the revised execution time, which defines a new data processing unit); performing said plurality of processing operations/data processing on said plurality of audio entities according to said revised allocation (per the claim 1 rejection, based on the revised execution time); and outputting processed audio (per the claim 1 rejection, based on the revised execution time).

As per claim 19, the claim 1 rejection discloses an audio processing system including multiple data processing units (per the claim 1 rejection), said audio processing system being configured to perform data processing operations on a plurality of digital audio entities, each digital audio entity comprising a plurality of samples (the tasks on the audio mixer outputs per the claim 1 rejection), wherein each audio/digital audio entity has at least one data processing operation performed on it (the outputting via synchronization per the claim 1 rejection), the audio processing system including a control unit (the synchronization element 16) arranged to allocate each data processing operation to one of said data processing units (per the claim 1 and 11 rejections), such that said data processing operation is performed on said one of said data processing units (the synchronization on a task per the claim 1 rejection), wherein the control unit performs said allocation at least partly on the basis of an expected execution time and duration of execution for the data processing operation on said one of said data processing units to which it is allocated (per the claim 1 rejection).

As per claim 20, the audio processing system of claim 19 wherein the control unit is arranged to identify one or more realtime data processing operations that must be performed in a predetermined time period (required as part of the step of para. 16: "is configured to transmit the respective second message to each video production control system node at a respective time in advance of the target execution time that is based on the respective round-trip message time associated with each video production control system node"), and allocate said realtime processing operations to processing units such that said realtime processing operations are performed before non-realtime processing operations (per the claim 2 and 11 rejections).

As per claim 21, the audio processing system of claim 19 wherein the control unit generates a revised allocation of each data processing operation to one of said data processing units (the means of generating the revised execution time as used by the data processing units).

As per claim 22, the audio processing system of claim 19 which further includes an execution time database containing expected execution time and duration of execution data (per the claim 8 rejection).

As per claim 23, the audio processing system of claim 22 which further includes an execution monitoring component configured to determine an actual execution time and duration of execution for a data processing operation and update the execution time database (the means of obtaining the cited revised execution time per the claim 1 and 11 rejections).

As per claim 24, a non-transitory computer readable medium configured to carry instructions which, when executed by a computer system, cause the computer system to perform a method as claimed in claim 1 (the system and method of the claim 1 rejection require memory in order to hold software for a digital processor in order to implement the cited method).
As per claim 25, the non-transitory computer readable medium of claim 24 comprising instructions to implement a software application comprising any one of: a digital audio workstation; and video editing software (the application is a video production/editing system, which requires a software application in order to implement the cited method).

Response to Arguments

The submitted arguments have been considered but are moot in view of the new grounds of rejection.

Previous responses to previous arguments:

As per applicant's argument that "Rockel fails to teach or suggest 'allocating each data processing operation to one of said data processing units, such that said data processing operation is performed on said one of said data processing units; wherein said allocation is based at least partly on an expected execution time for the data processing operation on said one of said data processing units to which it is allocated,'" the examiner responds that data processing unit, expected execution time, and data processing operation are being read reasonably broadly, as per the final rejection above, with the respective functions cited in Rockel in the claim 1 rejection.

As per applicant's argument that Rockel does not disclose any allocation of tasks to different production devices (remarks, page 9), the examiner notes that a broadly recited "data processing unit" is drawn to the portion of the digital-processor-based system shown in fig. 1 of Rockel, where different nodes are allocated in order to allocate respective tasks per the control, timing, and sync signaling, and also via the capabilities of the devices as described in para. 63.

As per applicant's argument that the length of this delay does not affect allocation of a device to a task, the examiner disagrees and notes that the signaling and synchronization are reasonably read as part of the allocation of a task to a production device/unit. Further, applicant's arguments and specification are silent as to an actual set of processor-level signaling attributed to the allocation step.

As per applicant's arguments regarding the claim 25 rejection, the examiner notes the system of Rockel comprises video editing software, as it is (abstract) a video production system which by definition comprises video editing, further noting the editing functions recited in para. 4 as part of the video production/editing system, including transitions and/or camera movement.

As per applicant's argument that Rockel does not disclose determining if each operation is a realtime or non-realtime operation, the examiner notes the rejection above clearly recites the examiner's stance on what is drawn to each of realtime and non-realtime processing operations.

As per applicant's arguments regarding claim 19, the examiner's responses above pertaining to the allocation and/or allocating are applied to said arguments.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER KRZYSTAN, whose telephone number is 571-272-7498 and whose email address is alexander.krzystan@uspto.gov. The examiner can usually be reached M-F, 7:30-4:00 EST. If attempts to reach the examiner by telephone or email are unsuccessful, the examiner's supervisor, Fan Tsang, can be reached at (571) 272-7547. The fax phone numbers for the organization where this application or proceeding is assigned are 571-273-8300 for regular communications and 571-273-8300 for After Final communications.

/ALEXANDER KRZYSTAN/
Primary Examiner, Art Unit 2653

Examiner: Alexander Krzystan
February 13, 2026

Prosecution Timeline

Nov 09, 2022
Application Filed
Oct 16, 2024
Non-Final Rejection — §102
Jan 21, 2025
Response Filed
Feb 27, 2025
Final Rejection — §102
Jul 03, 2025
Request for Continued Examination
Jul 07, 2025
Response after Non-Final Action
Sep 18, 2025
Non-Final Rejection — §102
Jan 20, 2026
Response Filed
Feb 13, 2026
Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598440: RENDERING OF OCCLUDED AUDIO ELEMENTS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12593170: SWITCHING METHOD FOR AUDIO OUTPUT CHANNEL, AND DISPLAY DEVICE
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12573410: DECODER, ENCODER, AND METHOD FOR INFORMED LOUDNESS ESTIMATION IN OBJECT-BASED AUDIO CODING SYSTEMS
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12574675: Acoustic Device and Method
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12541554: TRANSCRIPT AGGREGATON FOR NON-LINEAR EDITORS
Granted Feb 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 81%
With Interview (+6.9%): 88%
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 1121 resolved cases by this examiner. Grant probability is derived from the career allow rate.
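The headline projections are internally consistent: the 88% "with interview" figure is the 81% base grant probability plus the +6.9 percentage-point interview lift, rounded. A quick check, assuming the lift is additive in percentage points:

```python
# Sanity check on the projection arithmetic, assuming the interview
# lift is additive in percentage points.
base_probability = 81.0   # career allow rate, %
interview_lift = 6.9      # interview lift, percentage points

with_interview = round(base_probability + interview_lift)
print(with_interview)  # 88
```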
