Prosecution Insights
Last updated: April 19, 2026
Application No. 18/728,919

EFFICIENT LOUDSPEAKER SURFACE SEARCH FOR MULTICHANNEL LOUDSPEAKER SYSTEMS

Status: Non-Final OA (§103)
Filed: Jul 15, 2024
Examiner: SHAIKH, ZEESHAN MAHMOOD
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: Nokia Technologies Oy
OA Round: 1 (Non-Final)

Grant Probability: 52% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 52% (16 granted / 31 resolved; -10.4% vs TC avg)
Interview Lift: +55.0% in resolved cases with interview
Typical Timeline: 3y 2m average prosecution; 32 currently pending
Career History: 63 total applications across all art units

Statute-Specific Performance

§101: 25.7% (-14.3% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 31 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 2/10/2025, 8/7/2025, 9/25/2025, and 10/22/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 5-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al., US 20170103766 A1 (hereinafter Kim), in view of Jo et al., US 20150117650 A1 (hereinafter Jo), in further view of Ramprashad et al., US 20220394407 A1 (hereinafter Ramprashad).

Regarding independent claims 1 and 18, Kim teaches an apparatus/method for spatial audio signal decoding (FIG. 4, 204) and rendering associated with a plurality of speaker nodes placed within a three-dimensional space having a virtual surface arrangement comprising a plurality of virtual surfaces (FIG. 15, 452), wherein each of the plurality of virtual surfaces has corners positioned at at least three speaker nodes (FIG. 15, 454, channels 1-3), wherein the virtual surface arrangement is defined at least in part by a virtual surface set comprising a plurality of virtual surfaces (FIG. 15, 452), wherein each of the plurality of virtual surfaces is referenced by a reference means (FIG. 15, 452; the examiner interprets the triangle vertices as the reference), and wherein the apparatus comprises at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to ([0175] “Audio decoding device 22D may further include one or more processors electrically coupled to the memory”): determine an azimuth angle for each virtual surface of the virtual surface set (FIG. 7, 131; [0084] “loudspeaker position information 48 into bitstream 56A, bitstream generation unit 52A may encode (e.g., signal) the number of loudspeakers (e.g., N) in the source loudspeaker setup and the positions of the loudspeakers of the source loudspeaker setup in the form of an azimuth and an elevation”); arrange the virtual surfaces of the virtual surface set into an order based on the determined azimuth angles to give an ordered virtual surface set ([0040] “The hierarchical set of elements may refer to a set of elements in which the elements are ordered such that a basic set of lower-ordered elements provides a full representation of the modeled soundfield”); determine at least two search sectors, wherein each of the at least two search sectors occupies a range of azimuth angles (FIG. 7, 133, 134; [0109] “Columns 133 and 134 of table 130 specify acceptable ranges of azimuth angles for loudspeakers in degrees”); and determine a search sector from the at least two search sectors based on the target azimuth angle (FIG. 7, 133-136).

Kim fails to teach: associate a virtual surface of the ordered virtual surface set to each of the at least two search sectors; obtain a target panning direction comprising at least a target azimuth angle; and, starting from the associated virtual surface for the determined search sector, search the ordered virtual surface set to determine a virtual surface that encloses the target panning direction.

However, Jo teaches associating a virtual surface of the ordered virtual surface set to each of the at least two search sectors ([0015] “selecting polygons existing within a certain range from the polygon selected with respect to the any one frame from among the plurality of polygons; and calculating distances from the changed location of the object sound only with respect to the selected polygons existing within the certain range”; here the examiner interprets the polygons as the virtual surfaces).

Kim in view of Jo is considered to be analogous to the claimed invention because both are in the same field of spatial audio communication. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the techniques of coding of higher-order ambisonic audio data of Kim with the technique of associating virtual surfaces with search sectors taught by Jo in order to improve a method and apparatus for generating a multi-channel audio signal corresponding to a location of an object sound (see Jo [0003]).
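The search structure at issue in claim 1 (surfaces ordered by azimuth, each azimuth sector associated with a starting surface, the sector chosen from the target azimuth) can be sketched as follows. This is a minimal illustration only; the function names, the four-sector split, and the example layout are assumptions, not taken from the application or the cited references:

```python
import math

def surface_azimuth(corners):
    """Azimuth (degrees, 0-360) of a surface, taken here as the angle of the
    sum of its corner vectors projected onto the x-y plane (hypothetical choice)."""
    sx = sum(c[0] for c in corners)
    sy = sum(c[1] for c in corners)
    return math.degrees(math.atan2(sy, sx)) % 360.0

def build_search_index(surfaces, n_sectors=4):
    """Order surfaces by azimuth and associate each equal-width azimuth sector
    with a starting surface (index of the first surface at/after the sector start)."""
    ordered = sorted(surfaces, key=surface_azimuth)
    azimuths = [surface_azimuth(s) for s in ordered]
    width = 360.0 / n_sectors
    starts = []
    for k in range(n_sectors):
        lo = k * width
        idx = next((i for i, a in enumerate(azimuths) if a >= lo), 0)
        starts.append(idx)
    return ordered, starts

def start_index(target_azimuth, starts, n_sectors=4):
    """Pick the starting surface index for the sector containing the target azimuth."""
    width = 360.0 / n_sectors
    return starts[int(target_azimuth % 360.0 // width)]

# Hypothetical layout: four horizontal triangles facing front/left/back/right.
FRONT = [(1.0, 0.0, 0.0), (0.7, 0.7, 0.0), (0.7, -0.7, 0.0)]
LEFT  = [(0.0, 1.0, 0.0), (0.7, 0.7, 0.0), (-0.7, 0.7, 0.0)]
BACK  = [(-1.0, 0.0, 0.0), (-0.7, 0.7, 0.0), (-0.7, -0.7, 0.0)]
RIGHT = [(0.0, -1.0, 0.0), (0.7, -0.7, 0.0), (-0.7, -0.7, 0.0)]
ORDERED, STARTS = build_search_index([BACK, FRONT, RIGHT, LEFT])
```

The point of the arrangement is that a linear search for the enclosing surface begins at `ORDERED[start_index(...)]` rather than at index 0, so only a fraction of the ordered set is typically visited.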
Kim in view of Jo fails to teach: obtain a target panning direction comprising at least a target azimuth angle; and, starting from the associated virtual surface for the determined search sector, search the ordered virtual surface set to determine a virtual surface that encloses the target panning direction.

However, Ramprashad teaches obtaining a target panning direction comprising at least a target azimuth angle ([0007] “Panning limits, or a joint range of horizontal and vertical panning directions, may be determined based on orientation of the device. The orientation of the device may imply how the audio is located relative to horizontal and vertical spans of the device”); and, starting from the associated virtual surface for the determined search sector, searching the ordered virtual surface set to determine a virtual surface that encloses the target panning direction ([0113] “the session parameters may include location information (e.g., X, Y-coordinates) of the visual representation, with respect to the GUI and/or with respect to the display screen on which the GUI is displayed. The generator determines one or more panning ranges (e.g., an azimuth panning range, an elevation panning range, etc.) of one or more speakers (at block 83). Specifically, these angle ranges may correspond to the maximum (or minimum) range at which a virtual sound source may be (e.g., optimally) positions within space, such as azimuth panning ranges −φ−+ω and elevation panning ranges −φ−+β”).

Kim in view of Jo in view of Ramprashad is considered to be analogous to the claimed invention because all are in the same field of spatial audio communication.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the techniques of coding of higher-order ambisonic audio data of Kim in view of Jo with the technique of obtaining panning directions taught by Ramprashad in order to improve how audio is spatialized in communication sessions (see Ramprashad [0001]).

Regarding claims 2 and 19, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claims 1 and 18, upon which claims 2 and 19 depend. Additionally, Kim teaches wherein the reference means is an index (FIG. 8; [0110] “audio encoding device 14 may specify a position of a loudspeaker in the source loudspeaker setup by signaling an index of an entry in table 140. For example, audio encoding device 14 may specify a loudspeaker in the source loudspeaker setup is at azimuth 1.967778 radians and elevation 0.428967 radians by signaling index value 46.”).

Regarding claims 3 and 20, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claims 2 and 19, upon which claims 3 and 20 depend. Additionally, Kim teaches wherein the apparatus caused to start from the associated virtual surface for the determined search sector, search the ordered virtual surface set to determine a virtual surface that encloses the target panning direction is further caused to: determine an initial search index for the determined search sector, wherein the initial search index is an index of the associated virtual surface for the determined search sector (FIG. 8; [0110] “audio encoding device 14 may specify a position of a loudspeaker in the source loudspeaker setup by signaling an index of an entry in table 140. For example, audio encoding device 14 may specify a loudspeaker in the source loudspeaker setup is at azimuth 1.967778 radians and elevation 0.428967 radians by signaling index value 46”); determine a set of panning gains for the at least three speaker nodes of the associated virtual surface for the determined search sector (FIG. 14, 406, 416; [0142] “gain determination unit 406 determines a set of gain factors 416. Each respective gain factor of the set of gain factors 416 corresponds to a respective loudspeaker of the source loudspeaker setup”); and determine that the associated virtual surface encloses the target panning direction when each panning gain of the set of panning gains for the at least three speaker nodes of the associated virtual surface for the determined sector is non-negative ([0147] “the gain factors are not permitted to be negative”; [0143] “the gain factors applied to an audio signal output by three speakers trick a listener into perceiving that the audio signal is coming from a virtual source position 450 located within an active triangle 452 between the three loudspeakers”).

Regarding claim 5, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claim 1, upon which claim 5 depends. Additionally, Kim teaches wherein each of the plurality of virtual surfaces is defined by at least three vectors each pointing to one of the at least three speaker nodes, wherein the apparatus caused to determine an azimuth angle for each virtual surface of the virtual surface set is caused to: determine, for each virtual surface, a vector sum of the at least three vectors ([0149] “where vector finalization unit 404 determines spatial vector 418 using Equation (37), spatial vector 418 is equivalent to a sum of a plurality of operands. Each respective operand of the plurality of operands corresponds to a respective loudspeaker location of the plurality of loudspeaker locations”). Additionally, Ramprashad teaches determining the azimuth angle, for each virtual surface, as an angle of the vector sum projected onto an x-y plane ([0070] “the virtual sound sources being spread about a two-dimensional (2D) XY-plane”).

Regarding claim 6, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claim 1, upon which claim 6 depends. Additionally, Ramprashad teaches wherein an azimuth angle for the associated virtual surface is a border angle for the determined search sector, wherein the apparatus caused to start from the associated virtual surface for the determined search sector, search the ordered virtual surface set to determine a virtual surface that encloses the target panning direction is further caused to: determine whether the target azimuth angle is less than the azimuth angle for the associated virtual surface ([0139] “Since the height of the session GUI extends the height of the display screen, the elevation panning angles remain the same, whereas, since the width of the session GUI is shorter than the width of the display screen, the azimuth panning range is reduced to −θ.sub.w−+ω.sub.w, which is less than the total azimuth panning range”); when the target azimuth angle is less than the azimuth angle for the associated virtual surface, the apparatus is further caused to determine that the associated virtual surface encloses the target panning direction and determine a set of panning gains for the at least three speaker nodes of the associated virtual surface for the determined search sector ([0140] “when the GUI 41 has a larger aspect ratio, the defined azimuth panning range may equal to the total panning range, whereas the elevation panning range is reduced to −ϕw −+βw, which is less than the total elevation panning range”); and when the target azimuth angle is not less than the azimuth angle for the associated virtual surface, the apparatus is further caused to determine that, when the target azimuth angle is less than a border azimuth angle for a further virtual surface of the ordered virtual surface set, the further virtual surface encloses the target panning direction and determine a set of panning gains for the at least three speaker nodes of the further virtual surface ([0162] “when the width of the GUI is the width of the display screen, the azimuth panning range is the total azimuth panning range, and the elevation panning range is less than the total elevation panning range, when the height of the GUI is the height of the display screen, the azimuth panning range is less than the total azimuth panning range, and the elevation panning range is the total elevation panning range”).

Regarding claim 7, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claim 1, upon which claim 7 depends. Additionally, Kim teaches wherein each of the plurality of virtual surfaces is defined by at least three vectors each pointing to one of the at least three speaker nodes, wherein the apparatus caused to determine an azimuth angle for each virtual surface of the virtual surface set is caused to: determine, for each virtual surface, a first azimuth angle of a first of the at least three vectors ([0109] “Column 131 of table 130 specifies ideal azimuths for loudspeakers in degrees. Column 132 of table 130 specifies ideal elevations for loudspeakers in degrees.”); determine, for each virtual surface, a second azimuth angle of a second of the at least three vectors (FIG. 7, 131); and select the azimuth angle for each virtual surface as the larger of the first azimuth angle and the second azimuth angle ([0144] “The desired direction Ω=(θ, φ) of the audio object may be given as azimuth angle φ and elevation angle θ. The unity length position vector p(Ω) of the virtual source in Cartesian coordinates”).
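The enclosure test relied on for claims 3 and 20 is the standard vector-base (VBAP-style) one quoted from Kim ([0143], [0147]): a loudspeaker triplet encloses the target direction exactly when all three panning gains are non-negative. A minimal sketch of that test, assuming unit-vector loudspeaker directions and a non-degenerate (invertible) base; the names and the example layout below are hypothetical:

```python
import numpy as np

def unit(azimuth_deg, elevation_deg=0.0):
    """Unit direction vector from azimuth/elevation in degrees."""
    a, e = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(e) * np.cos(a), np.cos(e) * np.sin(a), np.sin(e)])

def panning_gains(triplet, target):
    """Solve L^T g = p, where the rows of L are the three loudspeaker
    direction vectors and p is the target direction (vector base panning)."""
    L = np.vstack(triplet)
    return np.linalg.solve(L.T, target)

def encloses(triplet, target, tol=1e-9):
    """A triplet encloses the target direction iff every gain is non-negative."""
    return bool(np.all(panning_gains(triplet, target) >= -tol))

# Hypothetical triplet: front, left, and overhead loudspeakers.
BASE = [unit(0.0), unit(90.0), unit(0.0, 90.0)]
```

With this base, a target at azimuth 45°/elevation 30° yields all-positive gains (enclosed), while a target at azimuth 135° yields a negative front gain (not enclosed), which is what terminates or continues the search over the ordered surface set.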
Regarding claim 8, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claim 1, upon which claim 8 depends. Additionally, Ramprashad teaches wherein the apparatus is further caused to: obtain an elevation angle for a horizontal plane within the three-dimensional space, wherein a number of the plurality of speaker nodes are situated on the horizontal plane ([0111] “the azimuth panning range 100 includes azimuth angles of the four virtual sound sources 53-54 (or L.sub.1-L.sub.4, respectively) relative (or at) the reference point 99, along the horizontal X-axis. Specifically, the reference point is the vertex of each angle and each azimuth panning angle extends away from a 0° reference axis, the Z-axis, (e.g., or towards either −φ or +ω), along the horizontal X-axis. Similarly, the elevation panning range 101 shows each of the elevation angles for the four virtual sound sources relative to the reference point”); and create an elevation angle range between a minimum elevation angle and the elevation angle for the horizontal plane ([0120] “the reference point, the generator determines the elevation viewing angle for L1 as +β′L1 and the elevation viewing angle for L2 as −ϕ′L2. To determine the actual elevation angles, the generator applies the elevation viewing angles as input for the elevation function 119 of the elevation panning range −ϕ−+β with respect to the elevation viewing range −ϕ′−+β”).

Regarding claim 9, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claim 8, upon which claim 9 depends.
Additionally, Ramprashad teaches wherein the apparatus is further caused to: create a further elevation angle range between the elevation angle for the horizontal plane and a maximum elevation angle (these angle ranges may correspond to the maximum (or minimum) range at which a virtual sound source may be (e.g., optimally) positions within space, such as azimuth panning ranges −φ−+ω and elevation panning ranges −φ−+β, shown in FIG. 8).

Regarding claim 10, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claim 8, upon which claim 10 depends. Additionally, Ramprashad teaches wherein the apparatus is further caused to: obtain an elevation angle for a further horizontal plane within the three-dimensional space, wherein a further number of the plurality of speaker nodes are situated on the further horizontal plane ([0110] “FIG. 7, along with azimuth and elevation panning angles for each source and a distance between the sources and a reference point. As shown, the arrangement 62 is bounded by panning ranges of the one or more speakers that are used to output the virtual sources”); and create a further elevation angle range between the elevation angle for the horizontal plane and the elevation angle for the further horizontal plane ([0120] “the reference point, the generator determines the elevation viewing angle for L1 as +β′L1 and the elevation viewing angle for L2 as −ϕ′L2. To determine the actual elevation angles, the generator applies the elevation viewing angles as input for the elevation function 119 of the elevation panning range −ϕ−+β with respect to the elevation viewing range −ϕ′−+β′”).

Regarding claim 11, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claim 10, upon which claim 11 depends.
Additionally, Ramprashad teaches wherein the apparatus is further caused to: create a yet further elevation angle range between the elevation angle for the further horizontal plane and a maximum elevation angle ([0138] “this stage also illustrates the (e.g., azimuth and elevation) panning ranges of the speakers of the local device. As shown, the total panning ranges (e.g., the maximum angles at which virtual sound sources may be positioned when respective input audio streams are spatial rendered using the speakers of the local device) extend to the edges of the display screen”).

Regarding claim 12, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claim 8, upon which claim 12 depends. Additionally, Ramprashad teaches wherein the apparatus is further caused to: assign the virtual surface set to one of the elevation angle range, the further elevation angle range, and the yet further elevation angle range by mapping an elevation angle associated with the virtual surface set to one of the elevation angle range, the further elevation angle range, and the yet further elevation angle range ([0007] “In one aspect, the joint horizontal and vertical panning limits may be a function of orientation. Individual or mixed sound sources will have virtual azimuth and elevation angles within that range, where the mapping from the locations of visual representations uses this range to define the function for the mapping”).

Regarding claim 13, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claim 12, upon which claim 13 depends.
Additionally, Ramprashad teaches wherein the target panning direction further comprises a target elevation angle, and wherein the apparatus is further caused to: determine that the target elevation angle lies within one of the elevation angle range, the further elevation angle range, and the yet further elevation angle range to give a determined elevation range ([0117] “to determine an elevation panning angle for a virtual sound source, the generator may use a y-coordinate of a location of the visual representation as input into a (e.g., separate) function of elevation panning range”).

Regarding claim 14, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claim 8, upon which claim 14 depends. Additionally, Jo teaches wherein the plurality of virtual surfaces with corners positioned at at least three speaker nodes of the plurality of speaker nodes have sides connecting pairs of corners configured to be non-intersecting with the horizontal plane within the three-dimensional space (FIG. 6; FIG. 9).

Regarding claim 15, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claim 10, upon which claim 15 depends. Additionally, Jo teaches wherein the plurality of virtual surfaces with corners positioned at at least three speaker nodes have sides connecting pairs of corners configured to be non-intersecting with the further horizontal plane within the three-dimensional space (FIGS. 5, 6, and 9).

Regarding claim 16, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claim 1, upon which claim 16 depends. Additionally, Jo teaches wherein the order of virtual surfaces of the virtual surface set is an increasing order of the determined azimuth angles of the virtual surfaces (FIGS. 7 & 8).

Regarding claim 17, Kim in view of Jo in view of Ramprashad teaches all of the limitations of claim 1, upon which claim 17 depends.
Additionally, Kim teaches wherein a virtual surface is a loudspeaker triplet comprising three vectors each pointing to a corner of the loudspeaker triplet (FIG. 15; [0144] “where three loudspeakers are used for each audio object, the three loudspeakers are arranged in a triangle to form a vector base. Each vector base is identified by the loudspeaker numbers k, m, n and the loudspeaker position vectors I.sub.k, I.sub.m, and I.sub.n given in Cartesian coordinates normalized to unity length”).

Allowable Subject Matter

Claim 4 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Kerdranvat et al. (US 20180376268 A1) teaches a method and an apparatus for detecting loudspeaker connection errors and positioning errors during calibration of a multichannel audio system to which a plurality of loudspeakers is connected. Within a calibration process of a multichannel audio system, the loudspeaker whose angle is to be measured is identified by emitting a test tone (451) and verifying (460) the conformance between the angles measured and a range of acceptable angles for each loudspeaker. A positioning error is detected when the measured angle is not included in the range of acceptable angles but in the range of acceptable angles of the closest speaker. A connection error is detected when the measured angle is very different from the range of acceptable angles. In case of errors, a recommendation is expressed (470) to the user in order to make the appropriate corrections. A calibration device (100) and an audio processing device (120) implementing the method are disclosed.

Jot et al. (US 20090092259 A1) teaches a two-channel phase-amplitude stereo encoding and decoding scheme enabling flexible and spatially accurate interactive 3-D audio reproduction via standard audio-only two-channel transmission. The encoding scheme allows associating a 2-D or 3-D positional localization to each of a plurality of sound sources by use of frequency-independent inter-channel phase and amplitude differences. The decoder is based on frequency-domain spatial analysis of 2-D or 3-D directional cues in a two-channel stereo signal and re-synthesis of these cues using any preferred spatialization technique, thereby allowing faithful reproduction of positional audio cues and reverberation or ambient cues over arbitrary multi-channel loudspeaker reproduction formats or over headphones, while preserving source separation despite the intermediate encoding over only two audio channels.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZEESHAN SHAIKH, whose telephone number is (703) 756-1730. The examiner can normally be reached Monday-Friday, 7:30 AM-5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZEESHAN MAHMOOD SHAIKH/
Examiner, Art Unit 2658

/RICHEMOND DORVIL/
Supervisory Patent Examiner, Art Unit 2658

Prosecution Timeline

Jul 15, 2024
Application Filed
Mar 19, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579373
SYSTEM AND METHOD FOR SYNTHETIC TEXT GENERATION TO SOLVE CLASS IMBALANCE IN COMPLAINT IDENTIFICATION
2y 5m to grant · Granted Mar 17, 2026
Patent 12555575
Wakeup Indicator Monitoring Method, Apparatus and Electronic Device
2y 5m to grant · Granted Feb 17, 2026
Patent 12518090
LOGICAL ROLE DETERMINATION OF CLAUSES IN CONDITIONAL CONSTRUCTIONS OF NATURAL LANGUAGE
2y 5m to grant · Granted Jan 06, 2026
Patent 12511318
MULTI-SYSTEM-BASED INTELLIGENT QUESTION ANSWERING METHOD AND APPARATUS, AND DEVICE
2y 5m to grant · Granted Dec 30, 2025
Patent 12512088
METHOD AND SYSTEM FOR USER-INTERFACE ADAPTATION OF TEXT-TO-SPEECH SYNTHESIS
2y 5m to grant · Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 52%
With Interview: 99% (+55.0%)
Median Time to Grant: 3y 2m
PTA Risk: Low

Based on 31 resolved cases by this examiner. Grant probability derived from career allow rate.
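As a sanity check, the headline figure follows directly from the examiner panel: the 52% grant probability is just the career allow rate of 16 granted out of 31 resolved cases, rounded. A quick calculation (the variable names are illustrative only):

```python
# Career allow rate = granted / resolved, as reported in the examiner panel.
granted, resolved = 16, 31
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 51.6%, displayed on the dashboard as 52%
```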
