Prosecution Insights
Last updated: April 19, 2026
Application No. 18/783,186

AUDIO PROCESSING

Status: Non-Final OA (§102)

Filed: Jul 24, 2024
Examiner: BLAIR, KILE O
Art Unit: 2691
Tech Center: 2600 — Communications
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 63% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 70%

Examiner Intelligence

Career Allow Rate: 63% of resolved cases (429 granted / 682 resolved; +0.9% vs TC avg)
Interview Lift: +7.4% on resolved cases with interview (moderate lift)
Avg Prosecution: 2y 8m typical timeline; 20 currently pending
Career History: 702 total applications across all art units
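The headline figures above can be reproduced from the raw counts. A minimal sketch follows; the additive combination of allow rate and interview lift is an assumption about how the dashboard derives its 70% with-interview figure, not a documented formula.

```python
# Reproduce the dashboard's headline figures from the raw counts shown above.
granted = 429
resolved = 682

allow_rate = granted / resolved      # career allow rate: 429/682 ~= 62.9%, shown as 63%
interview_lift = 0.074               # +7.4% lift observed on cases with an interview

# Assumption: the with-interview probability is the allow rate plus the lift.
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.1%}")      # -> 62.9%
print(f"With interview:    {with_interview:.1%}")  # -> 70.3%, shown as 70%
```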

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§103: 48.0% (+8.0% vs TC avg)
§102: 26.8% (-13.2% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 682 resolved cases.

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Schevciw (US 20210409888, IDS 1/14/25).

Regarding claim 1, Schevciw teaches a device comprising: a memory (memory 110, [0125]) configured to store audio data associated with an immersive audio environment (media file, [0125]); and one or more processors (processor 120, fig 2) configured to: obtain pose data for a listener in the immersive audio environment (pose, [0135]); determine a current listener pose based on the pose data and one or more pose constraints (movement, [0131]); obtain, based on the current listener pose, a rendered asset associated with the immersive audio environment (sound field representation, [0128]); and generate an output audio signal based on the rendered asset (output audio data, [0128]).

Regarding claim 2, Schevciw teaches the device of claim 1, wherein the one or more pose constraints include a human body movement constraint (translation between first and second pose, [0237]).

Regarding claim 3, Schevciw teaches the device of claim 2, wherein the human body movement constraint corresponds to a velocity constraint (velocity, [0135]).

Regarding claim 4, Schevciw teaches the device of claim 2, wherein the human body movement constraint corresponds to an acceleration constraint (acceleration, [0135]).
Regarding claim 5, Schevciw teaches the device of claim 2, wherein the human body movement constraint corresponds to a constraint on a hand or torso pose of the listener relative to a head pose of the listener (head tracker data, [0254]).

Regarding claim 6, Schevciw teaches the device of claim 1, wherein the one or more pose constraints include a boundary constraint that indicates a boundary associated with the immersive audio environment, and wherein the one or more processors are configured to determine the current listener pose such that the current listener pose is limited by the boundary (virtual location within a game environment, [0190]).

Regarding claim 7, Schevciw teaches the device of claim 1, wherein the one or more processors are configured to: obtain a pose based on the pose data; and determine whether the pose violates at least one of the one or more pose constraints (translation exceeding between first and second pose, [0237]).

Regarding claim 8, Schevciw teaches the device of claim 7, wherein the one or more processors are configured to, based on a determination that the pose does not violate the one or more pose constraints, use the pose as the current listener pose (select representation of the sound field associated with fifth viewpoint even though wearer has not moved, [0237]).

Regarding claim 9, Schevciw teaches the device of claim 7, wherein the one or more processors are configured to, based on a determination that the pose violates at least one of the one or more pose constraints (translation exceeding between first and second pose, [0237]), determine the current listener pose based on a prior listener pose that did not violate the one or more pose constraints (select representation of the sound field associated with fifth viewpoint even though wearer has not moved, [0237]).
Regarding claim 10, Schevciw teaches the device of claim 9, wherein the one or more processors are configured to, based on the determination that the pose violates at least one of the one or more pose constraints, determine a predicted listener pose based on a prior predicted listener pose associated with the prior listener pose (select representation of the sound field associated with fifth viewpoint even though wearer has not moved, [0237]).

Regarding claim 11, Schevciw teaches the device of claim 7, wherein the one or more processors are configured to, based on a determination that the pose violates at least one of the one or more pose constraints, determine the current listener pose based on an adjustment of the pose to satisfy the one or more pose constraints (transition from one representation to second representation, [0237]).

Regarding claim 12, Schevciw teaches the device of claim 1, wherein the pose data includes first pose data associated with a head of a listener (headset is used as the origin, [0244]) and second pose data associated with at least one of a torso of the listener or a hand of the listener (speed at which the head turns is inherently relative to the torso, [0245]).

Regarding claim 13, Schevciw teaches the device of claim 12, wherein the first pose data is obtained from a first device and wherein the second pose data is received from a second device that is distinct from the first device (first and second device, fig 2).

Regarding claim 14, Schevciw teaches the device of claim 1, wherein, to obtain the rendered asset, the one or more processors are configured to: determine a target asset based on the pose data; and generate an asset retrieval request to retrieve the target asset from a storage location (time-stamped location information 656, e.g., indicating user positions (e.g., using (x,y,z) coordinates) and time stamps associated with the user positions, [0189]).
Regarding claim 15, Schevciw teaches the device of claim 1, further comprising a pose sensor coupled to the one or more processors, and wherein the pose sensor is configured to provide at least a portion of the pose data (head tracking data from one or more sensors, [0291]).

Regarding claim 16, Schevciw teaches the device of claim 15, wherein the pose sensor and the one or more processors are integrated within a head-mounted wearable device (wearable device has audio FX renderer, fig 7).

Regarding claim 17, Schevciw teaches the device of claim 1, wherein the one or more processors are integrated within an immersive audio player device (wearable device has audio FX renderer, fig 7).

Regarding claim 18, Schevciw teaches the device of claim 1, further comprising a modem coupled to the one or more processors and configured to receive the pose data from a device that includes a pose sensor (modem, [0374]).

Claims 19 and 20 are each substantially similar to claim 1 and are rejected for the same reasons.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kile Blair, whose telephone number is (571) 270-3544. The examiner can normally be reached M-F.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen, can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KILE O BLAIR/
Primary Examiner, Art Unit 2691
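The rejection of claims 1 and 7-11 turns on a pose-constraint pipeline: obtain a candidate pose, test it against movement and boundary constraints, then accept it (claim 8), fall back to the prior valid pose (claim 9), or adjust it to satisfy the constraints (claim 11). A hedged sketch of that logic follows; every name and threshold here is hypothetical, drawn from neither the application's nor Schevciw's actual implementation.

```python
from dataclasses import dataclass

# Illustrative sketch of the pose-constraint logic recited in claims 1 and 7-11.
# All identifiers and thresholds are hypothetical.

@dataclass
class Pose:
    x: float
    y: float
    z: float

MAX_STEP = 0.5   # movement constraint: maximum displacement per update (cf. claims 2-4)
BOUND = 10.0     # boundary constraint: each coordinate must stay within +/-BOUND (cf. claim 6)

def violates(prev: Pose, cand: Pose) -> bool:
    """Claim 7: determine whether the candidate pose violates a constraint."""
    step = max(abs(cand.x - prev.x), abs(cand.y - prev.y), abs(cand.z - prev.z))
    out_of_bounds = any(abs(v) > BOUND for v in (cand.x, cand.y, cand.z))
    return step > MAX_STEP or out_of_bounds

def current_listener_pose(prev: Pose, cand: Pose) -> Pose:
    # Claim 8: if no constraint is violated, use the candidate as the current pose.
    if not violates(prev, cand):
        return cand

    # Claim 11 variant: adjust (clamp) the candidate so it satisfies the constraints.
    # A claim 9 variant would instead simply return `prev`, the prior valid pose.
    def clamp(p: float, c: float) -> float:
        c = min(max(c, p - MAX_STEP), p + MAX_STEP)  # limit per-update movement
        return min(max(c, -BOUND), BOUND)            # keep inside the boundary

    return Pose(clamp(prev.x, cand.x), clamp(prev.y, cand.y), clamp(prev.z, cand.z))
```

For example, with a prior pose at the origin, a candidate at (2, 0, 0) exceeds the 0.5 step limit and would be clamped to (0.5, 0, 0), while a candidate at (0.2, 0.1, 0) passes through unchanged.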

Prosecution Timeline

Jul 24, 2024: Application Filed
Feb 21, 2026: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593181: HEARING DEVICE ARRANGEMENT AND METHOD FOR AUDIO SIGNAL PROCESSING
2y 5m to grant • Granted Mar 31, 2026

Patent 12593155: SPEAKER BOX
2y 5m to grant • Granted Mar 31, 2026

Patent 12581229: ELECTRONIC DEVICE INCLUDING SPEAKER MODULE
2y 5m to grant • Granted Mar 17, 2026

Patent 12581260: SOUND PROCESSING APPARATUS AND SOUND PROCESSING SYSTEM
2y 5m to grant • Granted Mar 17, 2026

Patent 12563332: OPEN EARPHONES
2y 5m to grant • Granted Feb 24, 2026

Based on the 5 most recent grants. Study what changed to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 63%
With Interview: 70% (+7.4%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 682 resolved cases by this examiner. Grant probability derived from career allow rate.
