Prosecution Insights
Last updated: April 19, 2026
Application No. 18/433,267

OFFLOADING A CPU-BASED AUDIO PUMP AND PROCESSING TO AN AUDIO CO-PROCESSOR

Non-Final OA (§102, §103)
Filed: Feb 05, 2024
Examiner: SAUNDERS JR, JOSEPH
Art Unit: 2692
Tech Center: 2600 (Communications)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 73% (above average; 538 granted / 740 resolved; +10.7% vs TC avg)
Interview Lift: +20.6% among resolved cases with interview (a strong lift)
Typical Timeline: 2y 9m avg prosecution; 27 applications currently pending
Career History: 767 total applications across all art units

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§102: 29.6% (-10.4% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)
Based on career data from 740 resolved cases; the Tech Center average is an estimate.
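As a quick sanity check, the headline figures can be recomputed from the raw counts shown in the examiner summary. This sketch assumes (the page does not say) that the "vs TC avg" deltas are expressed in percentage points:

```python
# Recompute the headline examiner stats from the raw counts on this page:
# 538 granted of 740 resolved, reported as "+10.7% vs TC avg".
granted, resolved = 538, 740

allow_rate = 100 * granted / resolved   # career allowance rate, in percent
implied_tc_avg = allow_rate - 10.7      # assumes the delta is percentage points

print(f"Career allow rate: {allow_rate:.1f}%")      # 72.7%, shown above as 73%
print(f"Implied TC average: {implied_tc_avg:.1f}%")  # ~62.0%
```

The 93% with-interview figure is consistent with the reported +20.6% interview lift applied to the 73% baseline (73 + 20.6 ≈ 93.6).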

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is based on the communications filed February 5, 2024. Claims 1 – 20 are currently pending and considered below.

Claim Objections

Claims 9 and 10 are objected to because of the following informalities: Dependent claims 9 and 10 should also recite “The one or more non-transitory computer-readable media” as in independent claim 8. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1 – 3 and 7 is/are rejected under 35 U.S.C. 102(a)(1) and 35 U.S.C. 102(a)(2) as being anticipated by Puryear et al. (US 2004/0064210 A1), hereinafter Puryear. 
Claim 1: Puryear discloses in a computing device comprising a central processing unit (CPU) (see at least, “The host processor can be included in a personal computer. The host processor executes a driver that has a plurality of driver components. Each driver component has an instruction set executable by a respective DSP to transform an audio data stream in a predetermined manner that is different from that of the other driver components. The apparatus also has audio input and output devices for inputting and outputting audio data streams,” Puryear [0035]) and an audio co-processor (see at least, “In one implementation, an apparatus has a plurality of digital signal processors (DSP) in communication with a host processor. Each DSP is included in a separate piece of hardware, such as an accelerator card that is manufactured by a different manufacturer (e.g. (i) Creative Labs, Inc. of Milpitas, Calif., USA, (ii) Nvidia, Inc. of Santa Clara, Calif., USA, (iii) etc.),” Puryear [0035]), a method comprising, with the CPU: receiving a request from an application to add an audio stream to a device graph, the device graph being associated with an audio endpoint (see at least, “An application 102 communicates with an instance of audio rendering hardware 132C when the application 102 wants to process streaming audio media content. Audio filter graph manager 112 automatically creates audio filter graph 110A by invoking the appropriate filters. The communication of media content between filters is achieved by either (1) coupling virtual output pins of one filter to the virtual input pins of requesting filter; or (2) by scheduling object calls between appropriate filters to communicate the requested information. 
Audio filter graph manager 112 receives streaming data from the invoking application or an external source (not shown),” Puryear [0007]); responsive to receiving the request, modifying the device graph to incorporate the audio stream (see at least, “Audio filter graph manager 112 controls the data structure of audio filter graph 110A and the way data moves through audio filter graph 110A,” Puryear [0004]), including causing the audio co-processor to create a processing pipeline for the audio stream in the device graph (see at least, “Audio filter graphs work with data representing a variety of media (or non-media) data types, each type characterized by a data stream that is processed by the filter components comprising the audio filter graph. A filter positioned closer to the source of the data is referred to as an upstream filter, while those further down the processing chain is referred to as a downstream filter. For each data stream that the filter handles it exposes at least one virtual pin (i.e., distinguished from a physical pin such as one might find on an integrated circuit). A virtual pin can be implemented as a COM object that represents a point of connection for a unidirectional data stream on a filter. Input pins represent inputs and accept data into the filter, while output pins represent outputs and provide data to other filters. Each of the filters include at least one memory buffer, wherein communication of the media stream between filters is accomplished by a series of "copy" operations from one filter to another,” Puryear [0005], “In accordance with the illustrated example embodiment of an audio stack 300 seen in FIG. 3, one or more application program(s) 302 are coupled by an interface 304 to an audio filter graph manager 312. Audio filter graph manager 312 communicates with audio filter graph 310 that receives input from source hardware 301. 
Hardware accelerator 305, which can be one or more audio accelerator cards, uses drivers represented in audio filter graph 310 to provide local and global effects upon audio data streams as well as cancellation of echoes due to input received from source hardware 301. Table A, below, reflects successively linear processing by a series of filters and each of their respective corresponding hardware accelerator functions,” Puryear [0027], “Audio stack 400 provides an interface 404 to an audio filter graph manager 412. Audio filter graph manager 412 communicates with an audio filter graph 410 and to application program(s) 402 through interface 402 for the processing of audio with various hardware in one or more hardware accelerator cards 405. As used herein, application program(s) 402 are intended to represent any of a wide variety of applications which may benefit from an audio data stream processing application. Audio stack 400 has various filters linearly arranged and situated prior to a mixer hardware filter 422A,” Puryear [0030]), the processing pipeline comprising an audio processing object (APO) (see at least, “The filters of audio filter graph 110A can be implemented as COM objects, each implementing one or more interfaces, and each containing a predefined set of functions, called methods. Methods are called by one of the application programs 102 or other component objects in order to communicate with the object exposing the interface. The calling application program can also call methods or interfaces exposed by the object of the audio filter graph manager 112,” Puryear [0004], “For example, an effect filter is selectively invoked to introduce a particular effect (e.g., 3D audio positioning reverb, audio distortion, etc.) to a media stream,” Puryear [0007]); and directing the audio co-processor to process the audio stream with the modified device graph (see at least, “Similar to FIG. 2, FIG. 
3 shows the separation of the hardware mixing filter and its corresponding accelerator hardware (320A, 320B) from the render filter and its corresponding accelerator hardware (324A, 324B). As such, each of filters 322A and 323A can process the mixed final audio data stream that is output from hardware mixing filter 320A,” Puryear [0028], “Similar to FIGS. 2 and 3, FIG. 4 shows the separation of the hardware mixing filter and its corresponding accelerator hardware (422A, 422B) from the render filter and its corresponding accelerator hardware (432A, 432B). As such, one (1) virtual output pin from mixing hardware filter 422A provides the final audio data stream as an input to filter 424A. Filter 424A is the first in a linear series of other filters (426A, 428, and 430). In one implementation, more than one GFX filter can be connected together. In may be preferred that the GFX filters be applied prior to the AEC filter for global effect processing. Depending on where a GFX filter is positioned in an audio filter graph, it could become a local effect if it's used in the local stream (pre-mixer) or a GFX filter if it's applied on the mixed stream. Sometimes there could be two instances of an effects filter, where one works as a GFX filter while the other plays as a local effect filter. After the mixed final audio data stream is processed by Audio Render (Portclass) filter 432A and its corresponding Rendering System Hardware 432B, one or more speakers 434 can then render therefrom an analog version of the final audio data stream,” Puryear [0033], “It would further be an advance in the art to provide means for hardware acceleration of various effects on simultaneously produced audio data streams flowing through an audio filter graph, including GFX and AEC, the end result of which is heard in an analog rendering,” Puryear [0015]). 
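To make the division of labor recited in claim 1 concrete, here is a minimal sketch of that control flow: the CPU fields the application's request and edits the per-endpoint device graph, while pipeline creation and stream processing are offloaded to the co-processor. Every name below (CoProcessor, add_stream, the APO strings) is hypothetical, drawn from neither the application nor the cited art:

```python
# Hypothetical sketch of the claim 1 control flow. The CPU side only
# manages graph structure; the co-processor owns the streaming path.
class CoProcessor:
    def create_pipeline(self, stream_id):
        # Pipeline lives on the co-processor: a chain of audio
        # processing objects (APOs) for this one stream.
        return {"stream": stream_id, "apos": ["reverb_apo", "mixer_apo"]}

    def process(self, graph):
        # Co-processor runs the modified graph; the CPU only directs it.
        return [p["stream"] for p in graph["pipelines"]]

def add_stream(graph, coproc, stream_id):
    """CPU side: modify the endpoint's device graph for a new stream."""
    graph["pipelines"].append(coproc.create_pipeline(stream_id))
    return coproc.process(graph)

graph = {"endpoint": "speakers", "pipelines": []}
print(add_stream(graph, CoProcessor(), "app-stream-1"))  # -> ['app-stream-1']
```

Note how this framing differs from Puryear's filter graph, where the host processor's graph manager stays in the data path: in the claim, the CPU never touches the stream after the graph is modified.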
Claim 2: Puryear discloses the method of claim 1, wherein the APO comprises a configuration interface accessible to the CPU (see at least, “Mixer hardware filter 122A is seen in FIG. 2 has having four ( 4) virtual input pins that receive audio data streams. For simplicity in illustration, all virtual pins are not shown on all filters in FIG. 2. One (1) virtual input pin receives an audio data stream from KMixer.sys filter 114 and three (2) virtual input pins receive audio data streams from the DSound-Hardware API 108 of the interface 104,” Puryear [0026]) and a streaming interface accessible to the audio co-processor (see at least, “A software and hardware interface, seen in FIG. 2 as a double arrow line, provides communication between mixer hardware filter 122A and mixing hardware 122B,” Puryear [0026]), wherein the APO is one of a plurality of APOs (see at least, “The filters of audio filter graph 110A can be implemented as COM objects, each implementing one or more interfaces, and each containing a predefined set of functions, called methods. Methods are called by one of the application programs 102 or other component objects in order to communicate with the object exposing the interface. The calling application program can also call methods or interfaces exposed by the object of the audio filter graph manager 112,” Puryear [0004], “For example, an effect filter is selectively invoked to introduce a particular effect (e.g., 3D audio positioning reverb, audio distortion, etc.) to a media stream,” Puryear [0007]), and wherein the processing pipeline comprises the plurality of APOs and a plurality of connection buffers, the APOs being linked together by the connection buffers to form the processing pipeline (see at least, “Audio filter graphs work with data representing a variety of media (or non-media) data types, each type characterized by a data stream that is processed by the filter components comprising the audio filter graph. 
A filter positioned closer to the source of the data is referred to as an upstream filter, while those further down the processing chain is referred to as a downstream filter. For each data stream that the filter handles it exposes at least one virtual pin (i.e., distinguished from a physical pin such as one might find on an integrated circuit). A virtual pin can be implemented as a COM object that represents a point of connection for a unidirectional data stream on a filter. Input pins represent inputs and accept data into the filter, while output pins represent outputs and provide data to other filters. Each of the filters include at least one memory buffer, wherein communication of the media stream between filters is accomplished by a series of "copy" operations from one filter to another,” Puryear [0005]). Claim 3: Puryear discloses the method of claim 2, wherein modifying the device graph to incorporate the audio stream further comprises allocating the connection buffers from memory accessible to the audio co-processor (see at least, “Each of the filters include at least one memory buffer, wherein communication of the media stream between filters is accomplished by a series of "copy" operations from one filter to another,” Puryear [0005], “A software and hardware interface, seen in FIG. 2 as a double arrow line, provides communication between mixer hardware filter 122A and mixing hardware 122B,” Puryear [0026]), and wherein metadata for the connection buffers is accessible to the CPU (see at least, “Audio filter graphs work with data representing a variety of media (or non-media) data types, each type characterized by a data stream that is processed by the filter components comprising the audio filter graph. A filter positioned closer to the source of the data is referred to as an upstream filter, while those further down the processing chain is referred to as a downstream filter. 
For each data stream that the filter handles it exposes at least one virtual pin (i.e., distinguished from a physical pin such as one might find on an integrated circuit). A virtual pin can be implemented as a COM object that represents a point of connection for a unidirectional data stream on a filter. Input pins represent inputs and accept data into the filter, while output pins represent outputs and provide data to other filters,” Puryear [0005]). Claim 7: Puryear discloses the method of claim 1, wherein modifying the device graph to incorporate the audio stream further comprises determining a topology of the device graph with a CPU-based component of an audio processor object of the device graph, and wherein the topology comprises a list of APOs and connection buffers in the processing pipeline (see at least, “Audio filter graph manager 112 controls the data structure of audio filter graph 110A and the way data moves through audio filter graph 110A,” Puryear [0004], “Audio filter graphs work with data representing a variety of media (or non-media) data types, each type characterized by a data stream that is processed by the filter components comprising the audio filter graph. A filter positioned closer to the source of the data is referred to as an upstream filter, while those further down the processing chain is referred to as a downstream filter. For each data stream that the filter handles it exposes at least one virtual pin (i.e., distinguished from a physical pin such as one might find on an integrated circuit). A virtual pin can be implemented as a COM object that represents a point of connection for a unidirectional data stream on a filter. Input pins represent inputs and accept data into the filter, while output pins represent outputs and provide data to other filters. 
Each of the filters include at least one memory buffer, wherein communication of the media stream between filters is accomplished by a series of "copy" operations from one filter to another,” Puryear [0005]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 4, 8 – 11, 16, 17, 19, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Puryear in view of Swenson et al. (US 2004/0187043 A1), hereinafter Swenson.

Claim 4: Puryear discloses the method of claim 1, but does not disclose further comprising, responsive to receiving the request, with the CPU: creating an application-side shared buffer for the audio stream, the application-side shared buffer being accessible to the application and the audio co-processor. However, Swenson discloses in regards to processing an audio stream with a filter graph (see at least, System Audio Filter Graph Manager 275, Swenson FIG. 2a, “Policy 207 is responsible for being able to describe and set up a desired graph in response to opening the audio device that is driven by master audio device driver 226. 
Otherwise, most of the structure is set up in environment 200a by a system audio filter graph manager 275,” Swenson [0035]), creating an application-side shared buffer for the audio stream, the application-side shared buffer being accessible to the application and the audio co-processor (see at least, “Audio data flows from audio application 206 to Streaming Audio Renderer (SAR) 208. SAR 208 renders audio data to local engine 210 which in turn outputs audio data to buffer 212 for entry into the global engine section 274,” Swenson [0037], buffer 212, Swenson FIG. 2a, “The global engine section 274 receives input from application section 270 and outputs to output devices (not shown) via a Master Audio Device Driver 226. Included in the global engine section 274 is a processor 205. Processor 205 is intended to represent a name of an object that maintains a list of audio effects that are to be processed, where the object calls for these effects to be processed,” Swenson [0034], buffer 312, Swenson FIG. 3a, “An audio application/client API 302 creates and configures several of the main components that are used in an audio hardware slaving operation of method 300a. The main components in FIG. 3a that are created and configured by the audio application/client API 302 include an Audio Buffer 312, an Audio Data Source 310, an Audio Pump 306, a Master Audio Engine 308, and other software components (not shown),” Swenson [0060]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the aforementioned features of Swenson to the hardware accelerated global effects in the invention of Puryear thereby allowing for “entry into the global engine section,” Swenson [0037]. 
Claim 8: Puryear discloses the one or more non-transitory computer-readable media having stored thereon computer-executable instructions for causing an audio co-processor of a computing device (see at least, “In one implementation, an apparatus has a plurality of digital signal processors (DSP) in communication with a host processor. Each DSP is included in a separate piece of hardware, such as an accelerator card that is manufactured by a different manufacturer (e.g. (i) Creative Labs, Inc. of Milpitas, Calif., USA, (ii) Nvidia, Inc. of Santa Clara, Calif., USA, (iii) etc.),” Puryear [0035]), when programmed thereby, to perform operations comprising: responsive to a request from an application to add an audio stream to a device graph (see at least, “An application 102 communicates with an instance of audio rendering hardware 132C when the application 102 wants to process streaming audio media content. Audio filter graph manager 112 automatically creates audio filter graph 110A by invoking the appropriate filters. The communication of media content between filters is achieved by either (1) coupling virtual output pins of one filter to the virtual input pins of requesting filter; or (2) by scheduling object calls between appropriate filters to communicate the requested information. Audio filter graph manager 112 receives streaming data from the invoking application or an external source (not shown),” Puryear [0007]), with the audio co-processor, creating a processing pipeline for the audio stream in the device graph (see at least, “Audio filter graphs work with data representing a variety of media (or non-media) data types, each type characterized by a data stream that is processed by the filter components comprising the audio filter graph. A filter positioned closer to the source of the data is referred to as an upstream filter, while those further down the processing chain is referred to as a downstream filter. 
For each data stream that the filter handles it exposes at least one virtual pin (i.e., distinguished from a physical pin such as one might find on an integrated circuit). A virtual pin can be implemented as a COM object that represents a point of connection for a unidirectional data stream on a filter. Input pins represent inputs and accept data into the filter, while output pins represent outputs and provide data to other filters. Each of the filters include at least one memory buffer, wherein communication of the media stream between filters is accomplished by a series of "copy" operations from one filter to another,” Puryear [0005], “In accordance with the illustrated example embodiment of an audio stack 300 seen in FIG. 3, one or more application program(s) 302 are coupled by an interface 304 to an audio filter graph manager 312. Audio filter graph manager 312 communicates with audio filter graph 310 that receives input from source hardware 301. Hardware accelerator 305, which can be one or more audio accelerator cards, uses drivers represented in audio filter graph 310 to provide local and global effects upon audio data streams as well as cancellation of echoes due to input received from source hardware 301. Table A, below, reflects successively linear processing by a series of filters and each of their respective corresponding hardware accelerator functions,” Puryear [0027], “Audio stack 400 provides an interface 404 to an audio filter graph manager 412. Audio filter graph manager 412 communicates with an audio filter graph 410 and to application program(s) 402 through interface 402 for the processing of audio with various hardware in one or more hardware accelerator cards 405. As used herein, application program(s) 402 are intended to represent any of a wide variety of applications which may benefit from an audio data stream processing application. 
Audio stack 400 has various filters linearly arranged and situated prior to a mixer hardware filter 422A,” Puryear [0030]), the processing pipeline comprising an audio processing object (APO) (see at least, “The filters of audio filter graph 110A can be implemented as COM objects, each implementing one or more interfaces, and each containing a predefined set of functions, called methods. Methods are called by one of the application programs 102 or other component objects in order to communicate with the object exposing the interface. The calling application program can also call methods or interfaces exposed by the object of the audio filter graph manager 112,” Puryear [0004], “For example, an effect filter is selectively invoked to introduce a particular effect (e.g., 3D audio positioning reverb, audio distortion, etc.) to a media stream,” Puryear [0007]), and the device graph being associated with an audio endpoint (see at least, “An application 102 communicates with an instance of audio rendering hardware 132C when the application 102 wants to process streaming audio media content. Audio filter graph manager 112 automatically creates audio filter graph 110A by invoking the appropriate filters. The communication of media content between filters is achieved by either (1) coupling virtual output pins of one filter to the virtual input pins of requesting filter; or (2) by scheduling object calls between appropriate filters to communicate the requested information. Audio filter graph manager 112 receives streaming data from the invoking application or an external source (not shown),” Puryear [0007]). Puryear does not disclose processing the audio stream with the device graph by executing an audio pump thread with the audio co-processor. However, Swenson discloses in regards to processing an audio stream with a filter graph (see at least, System Audio Filter Graph Manager 275, Swenson FIG. 
2a, “Policy 207 is responsible for being able to describe and set up a desired graph in response to opening the audio device that is driven by master audio device driver 226. Otherwise, most of the structure is set up in environment 200a by a system audio filter graph manager 275,” Swenson [0035], by executing an audio pump thread with the audio co-processor (see at least, “audio pump 203, Swenson FIG. 2a, “Environment 200a enables the local engine 210 to generate audio data for an audio device driven by driver 226 by synchronizing the timing of the audio pump's wake up to the clock of the audio device. The local engine 210 that generates the audio data is running off a reference clock (e.g. the clock of the system timer) that is provided by the operating system to the audio pump 203 that wakes up the local engine 210. The reference clock can be unrelated to and independent of the clock of the audio device. As such, the reference clock may not be synchronized with the clock of the audio device,” Swenson [0040], “The global engine section 274 receives input from application section 270 and outputs to output devices (not shown) via a Master Audio Device Driver 226. Included in the global engine section 274 is a processor 205. Processor 205 is intended to represent a name of an object that maintains a list of audio effects that are to be processed, where the object calls for these effects to be processed,” Swenson [0034]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the aforementioned features of Swenson to the hardware accelerated global effects in the invention of Puryear thereby allowing for the advantage of “a process that synchronizes the reference clock (e.g., the clock of the system timer) with the rate at which the audio device is consuming or generating audio data (the render or capture rate of the audio device). 
This process can be an effective use of the audio system service of an operating system to match the rate of the audio device driven by a driver. As such, the process can avoid glitches caused by excess audio data in the audio buffer. Accordingly, the implementation provides lower and more predictable latency which would otherwise cause delay between when sound is suppose to be rendered and when the sound it is actually rendered by the audio device. Additionally, the implementation is not computationally intensive because it does not require a CPU-intensive sample rate conversion process,” Swenson [0041]. Claim 9: Puryear and Swenson disclose the one or more computer-readable media of claim 8, wherein the computing device further comprises a central processing unit (CPU), wherein processing the audio stream with the device graph by executing the audio pump thread with the audio co-processor comprises reading audio data for the audio stream from a shared buffer accessible to the CPU and the audio co-processor, and wherein the audio data is read from the shared buffer before any processing on the audio data is performed by the CPU (see at least, “Audio data flows from audio application 206 to Streaming Audio Renderer (SAR) 208. SAR 208 renders audio data to local engine 210 which in turn outputs audio data to buffer 212 for entry into the global engine section 274,” Swenson [0037], buffer 212, Swenson FIG. 2a, “The global engine section 274 receives input from application section 270 and outputs to output devices (not shown) via a Master Audio Device Driver 226. Included in the global engine section 274 is a processor 205. Processor 205 is intended to represent a name of an object that maintains a list of audio effects that are to be processed, where the object calls for these effects to be processed,” Swenson [0034]). 
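The shared-buffer handoff at issue in claims 8 and 9 can be sketched as follows: the application (CPU side) writes raw frames into memory visible to both processors, and the co-processor's audio pump thread drains that buffer and feeds the pipeline without the CPU ever processing the samples. The names, the gain-APO stand-in, and the use of a deque to mimic shared memory are illustrative assumptions only:

```python
# Illustrative sketch of the claims 8-9 shared-buffer handoff.
# A deque stands in for CPU/co-processor shared memory; real systems
# would use a hardware-visible ring buffer with completion signaling.
from collections import deque

shared_buffer = deque()

def app_write(frames):
    # CPU side: hand raw frames off untouched (no CPU-side processing,
    # per the claim 9 limitation).
    shared_buffer.extend(frames)

def pump_thread_tick():
    # Co-processor side: one wake-up of the audio pump thread drains
    # whatever is available and runs it through the pipeline.
    processed = []
    while shared_buffer:
        frame = shared_buffer.popleft()
        processed.append(frame * 0.5)  # e.g., an APO applying 0.5x gain
    return processed

app_write([1.0, -1.0, 0.25])
print(pump_thread_tick())  # -> [0.5, -0.5, 0.125]
```

This is also where the art arguably diverges from the claims: Swenson's pump wakes on a system reference clock synchronized to the device, whereas claim 10 recites the pump being initiated by a buffer completion signal from the endpoint's device driver.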
Claim 10: Puryear and Swenson disclose the one or more computer-readable media of claim 8, wherein the audio stream is a render audio stream, and wherein processing the audio stream with the device graph by executing the audio pump thread with the audio co-processor comprises: receiving a buffer completion signal from a device driver of the audio endpoint; responsive to receiving the buffer completion signal, initiating execution of the audio pump thread; and invoking the APO of the processing pipeline for the audio stream (see at least, “The clock of the system timer is used to wake up an audio pump software component 203 so that audio data is processed by a local engine 210. The wake up time of the audio pump 203 can be adjusted in fine increments so that the rate that the local engine 210 produces audio data will match the rate that the audio device consumes the audio data. Stated otherwise, adjustments are made to the periodicity of the wake up period such that the audio pump 203 that calls the local engine 210 will wake up on a different period. For instance, the audio pump 203 can wake up shorter than a 10 ms wake up period for every wakeup period so as to adjust the wake up period when the clock of the audio device is faster than the system clock,” Swenson [0039], “Environment 200a enables the local engine 210 to generate audio data for an audio device driven by driver 226 by synchronizing the timing of the audio pump's wake up to the clock of the audio device,” Swenson [0040]). Claim 11: Puryear discloses a computer system comprising: a central processing unit (CPU) (see at least, “The host processor can be included in a personal computer. The host processor executes a driver that has a plurality of driver components. Each driver component has an instruction set executable by a respective DSP to transform an audio data stream in a predetermined manner that is different from that of the other driver components. 
The apparatus also has audio input and output devices for inputting and outputting audio data streams,” Puryear [0035]); an audio co-processor (see at least, “In one implementation, an apparatus has a plurality of digital signal processors (DSP) in communication with a host processor. Each DSP is included in a separate piece of hardware, such as an accelerator card that is manufactured by a different manufacturer (e.g. (i) Creative Labs, Inc. of Milpitas, Calif., USA, (ii) Nvidia, Inc. of Santa Clara, Calif., USA, (iii) etc.),” Puryear [0035]); memory accessible to both the CPU and the audio co-processor (see at least, “Each of the filters include at least one memory buffer, wherein communication of the media stream between filters is accomplished by a series of "copy" operations from one filter to another,” Puryear [0005], “A software and hardware interface, seen in FIG. 2 as a double arrow line, provides communication between mixer hardware filter 122A and mixing hardware 122B,” Puryear [0026]); and an audio device graph builder comprising: an objects store for an audio endpoint, the objects store comprising a connection buffer (see at least, “An application 102 communicates with an instance of audio rendering hardware 132C when the application 102 wants to process streaming audio media content. Audio filter graph manager 112 automatically creates audio filter graph 110A by invoking the appropriate filters. The communication of media content between filters is achieved by either (1) coupling virtual output pins of one filter to the virtual input pins of requesting filter; or (2) by scheduling object calls between appropriate filters to communicate the requested information. 
Audio filter graph manager 112 receives streaming data from the invoking application or an external source (not shown),” Puryear [0007]) and an audio processing object (APO) (see at least, “The filters of audio filter graph 110A can be implemented as COM objects, each implementing one or more interfaces, and each containing a predefined set of functions, called methods. Methods are called by one of the application programs 102 or other component objects in order to communicate with the object exposing the interface. The calling application program can also call methods or interfaces exposed by the object of the audio filter graph manager 112,” Puryear [0004], “For example, an effect filter is selectively invoked to introduce a particular effect (e.g., 3D audio positioning reverb, audio distortion, etc.) to a media stream,” Puryear [0007]), the APO comprising a configuration interface accessible to the CPU (see at least, “Mixer hardware filter 122A is seen in FIG. 2 has having four ( 4) virtual input pins that receive audio data streams. For simplicity in illustration, all virtual pins are not shown on all filters in FIG. 2. One (1) virtual input pin receives an audio data stream from KMixer.sys filter 114 and three (2) virtual input pins receive audio data streams from the DSound-Hardware API 108 of the interface 104,” Puryear [0026]) and a streaming interface accessible to the audio co-processor (see at least, “A software and hardware interface, seen in FIG. 2 as a double arrow line, provides communication between mixer hardware filter 122A and mixing hardware 122B,” Puryear [0026]); and a device graph for the audio endpoint (see at least, “An application 102 communicates with an instance of audio rendering hardware 132C when the application 102 wants to process streaming audio media content. Audio filter graph manager 112 automatically creates audio filter graph 110A by invoking the appropriate filters. 
The communication of media content between filters is achieved by either (1) coupling virtual output pins of one filter to the virtual input pins of requesting filter; or (2) by scheduling object calls between appropriate filters to communicate the requested information. Audio filter graph manager 112 receives streaming data from the invoking application or an external source (not shown),” Puryear [0007]).

Puryear does not disclose the device graph comprising an audio pump object executable on the audio co-processor. However, Swenson discloses in regards to processing an audio stream with a device graph (see at least, System Audio Filter Graph Manager 275, Swenson FIG. 2a, “Policy 207 is responsible for being able to describe and set up a desired graph in response to opening the audio device that is driven by master audio device driver 226. Otherwise, most of the structure is set up in environment 200a by a system audio filter graph manager 275,” Swenson [0035]), comprising an audio pump object executable on the audio co-processor (see at least, audio pump 203, Swenson FIG. 2a, “Environment 200a enables the local engine 210 to generate audio data for an audio device driven by driver 226 by synchronizing the timing of the audio pump's wake up to the clock of the audio device. The local engine 210 that generates the audio data is running off a reference clock (e.g. the clock of the system timer) that is provided by the operating system to the audio pump 203 that wakes up the local engine 210. The reference clock can be unrelated to and independent of the clock of the audio device. As such, the reference clock may not be synchronized with the clock of the audio device,” Swenson [0040], “The global engine section 274 receives input from application section 270 and outputs to output devices (not shown) via a Master Audio Device Driver 226. Included in the global engine section 274 is a processor 205.
Processor 205 is intended to represent a name of an object that maintains a list of audio effects that are to be processed, where the object calls for these effects to be processed,” Swenson [0034]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the aforementioned features of Swenson to the hardware accelerated global effects in the invention of Puryear thereby allowing for the advantage of “a process that synchronizes the reference clock (e.g., the clock of the system timer) with the rate at which the audio device is consuming or generating audio data (the render or capture rate of the audio device). This process can be an effective use of the audio system service of an operating system to match the rate of the audio device driven by a driver. As such, the process can avoid glitches caused by excess audio data in the audio buffer. Accordingly, the implementation provides lower and more predictable latency which would otherwise cause delay between when sound is suppose to be rendered and when the sound it is actually rendered by the audio device. Additionally, the implementation is not computationally intensive because it does not require a CPU-intensive sample rate conversion process,” Swenson [0041].

Claim 16: Puryear discloses the computer system of claim 11, wherein the connection buffer is stored in memory accessible to the audio co-processor (see at least, “Each of the filters include at least one memory buffer, wherein communication of the media stream between filters is accomplished by a series of "copy" operations from one filter to another,” Puryear [0005], “A software and hardware interface, seen in FIG.
2 as a double arrow line, provides communication between mixer hardware filter 122A and mixing hardware 122B,” Puryear [0026]), and wherein metadata for the connection buffer is accessible to the CPU (see at least, “Audio filter graphs work with data representing a variety of media (or non-media) data types, each type characterized by a data stream that is processed by the filter components comprising the audio filter graph. A filter positioned closer to the source of the data is referred to as an upstream filter, while those further down the processing chain is referred to as a downstream filter. For each data stream that the filter handles it exposes at least one virtual pin (i.e., distinguished from a physical pin such as one might find on an integrated circuit). A virtual pin can be implemented as a COM object that represents a point of connection for a unidirectional data stream on a filter. Input pins represent inputs and accept data into the filter, while output pins represent outputs and provide data to other filters,” Puryear [0005]).

Claim 17: Puryear discloses the computer system of claim 11, wherein the device graph further comprises an audio processor object, and wherein the audio processor object is configured to include references to the APO and the connection buffer in the objects store (see at least, “Audio filter graph manager 112 controls the data structure of audio filter graph 110A and the way data moves through audio filter graph 110A,” Puryear [0004], “Audio filter graphs work with data representing a variety of media (or non-media) data types, each type characterized by a data stream that is processed by the filter components comprising the audio filter graph. A filter positioned closer to the source of the data is referred to as an upstream filter, while those further down the processing chain is referred to as a downstream filter.
For each data stream that the filter handles it exposes at least one virtual pin (i.e., distinguished from a physical pin such as one might find on an integrated circuit). A virtual pin can be implemented as a COM object that represents a point of connection for a unidirectional data stream on a filter. Input pins represent inputs and accept data into the filter, while output pins represent outputs and provide data to other filters. Each of the filters include at least one memory buffer, wherein communication of the media stream between filters is accomplished by a series of "copy" operations from one filter to another,” Puryear [0005]).

Claim 19: Puryear discloses the computer system of claim 11, wherein the audio endpoint is one of a plurality of audio endpoints coupled to the computer system, and wherein the audio device graph builder further comprises an objects store and a device graph for each of the plurality of audio endpoints (see at least, “In the global engine section 274, audio data flows to a global effects module (GFX) 214. Any of various signal processing can be performed by GFX 214 to accomplish a global effect on a mix of audio data streams that have been received in the global engine section 274. GFX 214 outputs the audio data it has processed to a buffer 216 from which the audio data flows in three (3) different directions. These three directions are, respectively, to a Master Audio Device Driver 226 and to two (2) slave audio device drivers 244, 260. Each of the three directions is discussed below,” Swenson [0050], “In the first direction, data flows to a matrix 218 from buffer 216. From matrix 218, audio data flows to a format conversion module 220. From format conversion module 220, the audio data flows to an end point 222,” Swenson [0051], “In the second direction, data flows to a matrix 230 from buffer 216. From matrix 230, audio data flows to SRC 232 and then to a format conversion module 238.
Effects other than the SRC 232 are contemplated, including a volume mix down, a channel mix down, etc. From format conversion module 238, the audio data flows to an end point 240,” Swenson [0052], “In the third direction, data flows to a matrix 250 from buffer 216. From matrix 250, audio data flows to SRC 252. Effects other than the SRC 252 are contemplated, including a volume mix down, a channel mix down, etc. From SRC 252, audio data flows to a format conversion module 254. From format conversion module 254, the audio data flows to an end point 256,” Swenson [0053]).

Claim 20: Puryear discloses the computer system of claim 19, wherein the plurality of audio endpoints comprises an audio render endpoint and an audio capture endpoint (see at least, “Stereo sound card 104B can be used to render (playback) or capture (record) audio data,” Swenson [0004]).

Allowable Subject Matter

Claims 5, 6, 12 – 15, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH SAUNDERS whose telephone number is (571)270-1063. The examiner can normally be reached Monday-Thursday, 9:00 a.m. - 4 p.m., EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carolyn R Edwards can be reached at (571)270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSEPH SAUNDERS JR/
Primary Examiner, Art Unit 2692

/CAROLYN R EDWARDS/
Supervisory Patent Examiner, Art Unit 2692
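For orientation on the technology at issue: the claim 10 limitation mapped above describes an audio pump thread that runs only when the audio endpoint's device driver signals a completed buffer, and that then invokes the APO chain for the stream. A minimal, illustrative sketch of such a signal-driven pump loop, using plain Python threads and entirely hypothetical names (nothing below is taken from the application or the cited references):

```python
import queue
import threading

def apply_apo(frame):
    """Hypothetical audio processing object (APO): a simple gain stage."""
    return [sample * 0.5 for sample in frame]

def audio_pump(completions, rendered, frames):
    """Pump loop: each buffer-completion signal from the endpoint driver
    releases one pass, which runs the APO and emits the processed buffer."""
    for frame in frames:
        completions.get()            # block until the driver reports a free buffer
        rendered.append(apply_apo(frame))

completions = queue.Queue()          # stands in for driver buffer-completion signals
rendered = []
frames = [[1.0, 2.0], [3.0, 4.0]]

pump = threading.Thread(target=audio_pump, args=(completions, rendered, frames))
pump.start()
for _ in frames:
    completions.put(None)            # driver signals a buffer completion
pump.join()
print(rendered)                      # [[0.5, 1.0], [1.5, 2.0]]
```

In the claimed arrangement the pump would execute on the audio co-processor and exchange buffers with the CPU through shared memory; the queue here merely stands in for the driver's completion signal.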

Prosecution Timeline

Feb 05, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596883
Audio Analysis for Text Generation
2y 5m to grant Granted Apr 07, 2026
Patent 12598420
AUDIO DEVICE WITH ELECTROSTATIC DISCHARGE PROTECTION
2y 5m to grant Granted Apr 07, 2026
Patent 12593190
User Experience Localizing Binaural Sound During a Telephone Call
2y 5m to grant Granted Mar 31, 2026
Patent 12585425
Light-function audio parameters
2y 5m to grant Granted Mar 24, 2026
Patent 12585422
DATA PROCESSING METHOD OF PROCESSING MULTITRACK AUDIO DATA AND DATA PROCESSING APPARATUS
2y 5m to grant Granted Mar 24, 2026
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
73%
Grant Probability
93%
With Interview (+20.6%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 740 resolved cases by this examiner. Grant probability derived from career allow rate.
