Prosecution Insights
Last updated: April 19, 2026
Application No. 18/011,829

INFORMATION PROCESSING DEVICE, OUTPUT CONTROL METHOD, AND PROGRAM

Final Rejection §103
Filed: Dec 20, 2022
Examiner: KURR, JASON R
Art Unit: 2695
Tech Center: 2600 — Communications
Assignee: Sony Group Corporation
OA Round: 4 (Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 6m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 75%, above average (524 granted / 697 resolved; +13.2% vs TC avg)
Interview Lift: +20.6% among resolved cases with interview (strong)
Avg Prosecution: 2y 6m typical timeline
Currently Pending: 23
Total Applications: 720, across all art units

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 42.9% (+2.9% vs TC avg)
§102: 31.3% (-8.7% vs TC avg)
§112: 15.4% (-24.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 697 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over Goo (US 20200366990 A1) in view of Tomlin et al. (US 20190289417 A1) and in further view of Yadegari (US 20200260209 A1).
With respect to claim 1, Goo discloses an information processing device comprising: circuitry (fig.11 #300) configured to cause at least one speaker (fig.11/17 #400) that is installed in a space to output sound of a sound source of a static object which constitutes audio of a content (Par.[0244-0245][0247] as shown in the embodiment of figure 17, speakers #400 output multi-channel audio such as front-left, front-right and surround signals, wherein the content may be that of a background sound or static sound); and an output device (fig.11/17 #100) provided for each listener to output sound of a virtual sound source of an audio object different from the sound source (Par.[0247] output device #100 or “headphones” may output virtual object sounds, such as sounds related to virtual vehicle #17b), wherein the sound of the virtual sound source is generated corresponding to a sound source position (Par.[0173][0183][0247-0248][0252] multi-channel sound implementation device #300 provides different sound signals to both speakers #400 and open-ear headphones #100; wherein sound signals provided to speakers #400 may be background objects and sound signals provided to headphones #100 may be interactive sound objects that correspond to a source position, such as shown by the virtual sound source #17b (vehicle) in figure 17).

Goo does not disclose expressly wherein the static object has a fixed sound source position (Official Notice is taken that it is well-known in the art that background sounds may have a fixed position or “static object”. It would have been obvious before the effective filing date to a person of ordinary skill in the art to include static sounds in the background sounds of Goo. The motivation for doing so would have been to reproduce background sounds that are not moving).
Goo discloses wherein an HRTF may be used to generate an immersive sound for the virtual sound source (Par.[0077]); however, it does not disclose expressly wherein the audio object is a dynamic audio object that has a moving sound source position. Tomlin discloses an output device (fig.1 #116,118) provided for a listener to output sound of a virtual sound source of a dynamic object (Par.[0017] dynamic audio objects #120 may be any object, real or virtual), wherein the dynamic object has a moving sound source position (Par.[0068] the dynamic audio objects may emit audio from an arbitrary position in space, which may change over time, based on a position of the dynamic audio object itself), wherein the sound of the virtual sound source of the dynamic object is generated by processing using a head related transfer function (HRTF) corresponding to a sound source position described by a prescribed direction and distance relative to the listener (Par.[0028] “HRTF”). It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to use the dynamic object processing of Tomlin to process the audio objects of Goo. The motivation for doing so would have been to account for moving objects, such as a movement of the virtual vehicle #17b of Goo.

The combination of Goo and Tomlin does not disclose expressly wherein a transfer function comprising a plurality of HRTF spheres corresponding to a sound source position described by a prescribed direction and distance relative to the listener, and an HRTF sphere used for localization of the virtual sound source switched by cross-fade processing from one HRTF sphere to another HRTF sphere to reproduce sound sources that travel in a depth-wise direction relative to the listener.
Yadegari discloses an information processing device and method of generating sound of a virtual sound source by processing using a transfer function comprising a plurality of HRTF spheres corresponding to the sound source position described by a prescribed direction and distance relative to the listener (Par.[0103] As shown in figures 16-18, a virtual source #1601,1701,1801 may be localized using a plurality of HRTF spheres (#1611,1613,1615,1617)(#1711,1713,1715,1717)(#1811,1813,1815,1817); Par.[0042] Use of a “sphere” shape around a head of a listener is known; Par.[0106] HRTF measurements may include elevations, wherein an elevation in addition to direction comprise the use of a sphere; and where a location of the source relative to a listener is defined by a direction and distance), wherein a transfer function comprising a plurality of HRTF spheres corresponding to a sound source position described by a prescribed direction and distance relative to the listener, and an HRTF sphere used for localization of the virtual sound source switched by cross-fade processing from one HRTF sphere to another HRTF sphere to reproduce sound sources that travel in a depth-wise direction relative to the listener (Par.[0105-0106] As shown in the embodiment of figure 18 of Yadegari, when a virtual sound source #1801 lies at a position between known HRTF spheres #1811,1813, points on the known spheres may be used to interpolate for position #1801 of the virtual sound source; wherein a cross-fade operation is a form of linear interpolation). It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to use the HRTF processing of virtual sound sources, provided by Yadegari, to generate the virtual sound sources of Goo and Tomlin.
The motivation for doing so would have been to virtually position the sound sources provided to the headphones according to control data, thereby positioning the virtual sound sources at a desired location within the sound field.

With respect to claim 2, Goo discloses the information processing device according to claim 1, wherein the output control unit causes headphones (#100) as the output device worn by the listener to output the sound of the virtual sound source at the prescribed direction and distance relative to the listener (Par.[0245][0247] virtual object sounds may be output to the open-ear headphones #100), wherein the headphones can capture outside sound (Par.[0067] “microphone”).

With respect to claim 3, Goo discloses the information processing device according to claim 2, wherein the content includes video image data and sound data, and the output controller causes the headphones to output the sound of the virtual sound source having a sound source position within the prescribed direction and distance relative to the listener from the position of a character included in the video image data (Par.[0245][0248] headphones #100 may be a VR/AR device comprising a display #17c; wherein images of a virtual character, such as vehicle #17b, are provided on the display. Virtual sounds from vehicle #17b are output when the vehicle is present within a range, such as being inside a sound zone).

With respect to claim 4, Goo discloses the information processing device according to claim 2, wherein the output controller causes the speaker to output channel-based sound and the headphones to output object-based sound of the virtual sound source, within the prescribed direction and distance relative to the user (Par.[0183][0247]).
With respect to claim 5, Goo discloses the information processing device according to claim 2, wherein the speakers output background sounds and the headphones output virtual sound sources (Par.[0247]); however, Goo does not disclose expressly wherein the output control unit causes the speaker to output sound of a static object and the headphones to output sound of the virtual sound source of a dynamic object, including changes in direction and distance relative to the listener (See Rejection of Claim 1).

With respect to claim 6, Goo discloses the information processing device according to claim 2, wherein the output controller causes the speaker to output common sound to be heard by a plurality of the listeners (Par.[0247] “background sound”) and the headphones to output sound to be heard by each of the listeners while changing the direction and distance of a sound source depending on the position of the listener (Par.[0219][0242] headphone audio signals are generated based on a location or position of the headphones).

With respect to claim 7, Goo discloses the information processing device according to claim 2, wherein the output control unit causes the speaker to output sound having a sound source position at a height equal to the height of the speaker and the headphones to output sound of the virtual sound source having a sound source position at a height different from the height and distance of the speaker (As shown in figure 17, speakers #400 and headphones #100 output sound at differing heights).

With respect to claim 8, Goo discloses the information processing device according to claim 2, wherein the output control unit causes the headphones to output sound of the virtual sound source having a sound source position apart from the speaker, including the prescribed direction and distance relative to the listener (Par.[0247-0248] See fig.17).
With respect to claim 9, Goo and Tomlin disclose the information processing device according to claim 1; however, they do not disclose expressly wherein a plurality of the virtual sound sources are mapped onto multiple HRTF spheres. Yadegari discloses an information processing device and method of generating sound of a virtual sound source by processing using a transfer function comprising a plurality of HRTF spheres corresponding to the sound source position described by a prescribed direction and distance relative to the listener (Par.[0103] As shown in figure 16, a virtual source #1601 may be localized using a plurality of HRTF spheres; Par.[0042] Use of a “sphere” shape around a head of a listener is known; Par.[0106] HRTF measurements may include elevations, wherein an elevation in addition to direction comprise the use of a sphere; recorded at different distances #1611,1613,1615,1617; where a location of the source relative to a listener is defined by a direction and distance) with the surface of each sphere having a different distance from a common reference position as a center, the information processing device further comprising a storage unit that stores information about the transfer function corresponding to the reference position in each of the virtual sound sources in each of the HRTF spheres (See: Yadegari Par.[0103]). It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to use the HRTF processing of virtual sound sources, provided by Yadegari, to generate the virtual sound sources of Goo and Tomlin. The motivation for doing so would have been to virtually position the sound sources provided to the headphones according to control data, thereby positioning the virtual sound sources at a desired location within the sound field.
With respect to claim 10, Goo discloses the information processing device according to claim 9 in view of Tomlin and Yadegari, wherein the layers of the virtual sound sources are provided by arranging the plurality of virtual sound sources in each of a plurality of full sphere shapes (Yadegari: see figs. 16-18, Par.[0042] “sphere”).

With respect to claim 11, Goo discloses the information processing device according to claim 9 in view of Tomlin and Yadegari, wherein the virtual sound sources in each HRTF sphere are equally spaced along longitudinal and latitudinal lines and use the same longitudinal and latitudinal coordinates to map direction relative to the listener, wherein each HRTF sphere represents a different distance relative to the listener (Yadegari: see figs. 16-18).

With respect to claim 12, Goo discloses the information processing device according to claim 9 in view of Tomlin and Yadegari, wherein the plurality of layers of the virtual sound sources include a layer of the virtual sound sources each having the transfer function adjusted for the listeners within the prescribed direction and distance relative to the listener (See: Yadegari Par.[0103]).

With respect to claim 13, Goo discloses the information processing device according to claim 9 in view of Tomlin and Yadegari, further comprising a sound localization processor which applies the transfer function from the appropriate HRTF spheres to an audio signal as a processing target and generates sound of the virtual sound source (See: Yadegari Par.[0103]).

With respect to claim 14, Goo discloses the information processing device according to claim 13 in view of Tomlin and Yadegari, wherein the sound localization processor switches sound to be output from the output device from sound of the virtual sound source in a prescribed HRTF sphere to sound of the virtual sound source in another HRTF sphere (Yadegari: Par.[0103-0106]).
With respect to claim 15, Goo discloses the information processing device according to claim 14 in view of Tomlin and Yadegari, wherein the output controller causes the output device to output the sound of the virtual sound source in the prescribed layer and the sound of the virtual sound source in the other layer generated on the basis of the audio signal having a gain adjusted prior to entering the convolution processor (Yadegari: Par.[0103-0106]).

With respect to claim 16, Goo discloses an output control method causing an information processing device to: cause at least one speaker (fig.11/17 #400) that is installed in a space to output sound of a sound source of a static object which constitutes audio of a content (Par.[0244-0245][0247] as shown in the embodiment of figure 17, speakers #400 output multi-channel audio such as front-left, front-right and surround signals, wherein the content may be that of a background sound or static sound); and cause an output device (fig.11/17 #100) provided for each listener to output sound of a virtual sound source of an audio object different from the sound source (Par.[0247] output device #100 or “headphones” may output virtual object sounds, such as sounds related to virtual vehicle #17b), wherein the sound of the virtual sound source is generated corresponding to a sound source position (Par.[0173][0183][0247-0248][0252] multi-channel sound implementation device #300 provides different sound signals to both speakers #400 and open-ear headphones #100; wherein sound signals provided to speakers #400 may be background objects and sound signals provided to headphones #100 may be interactive sound objects that correspond to a source position, such as shown by the virtual sound source #17b (vehicle) in figure 17).
Goo does not disclose expressly wherein the static object has a fixed sound source position (Official Notice is taken that it is well-known in the art that background sounds may have a fixed position or “static object”. It would have been obvious before the effective filing date to a person of ordinary skill in the art to include static sounds in the background sounds of Goo. The motivation for doing so would have been to reproduce background sounds that are not moving).

Goo discloses wherein an HRTF may be used to generate an immersive sound for the virtual sound source (Par.[0077]); however, it does not disclose expressly wherein the audio object is a dynamic audio object that has a moving sound source position. Tomlin discloses an output device (fig.1 #116,118) provided for a listener to output sound of a virtual sound source of a dynamic object (Par.[0017] dynamic audio objects #120 may be any object, real or virtual), wherein the dynamic object has a moving sound source position (Par.[0068] the dynamic audio objects may emit audio from an arbitrary position in space, which may change over time, based on a position of the dynamic audio object itself), wherein the sound of the virtual sound source of the dynamic object is generated by processing using a head related transfer function (HRTF) corresponding to a sound source position described by a prescribed direction and distance relative to the listener (Par.[0028] “HRTF”). It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to use the dynamic object processing of Tomlin to process the audio objects of Goo. The motivation for doing so would have been to account for moving objects, such as a movement of the virtual vehicle #17b of Goo.
The combination of Goo and Tomlin does not disclose expressly wherein a transfer function comprising a plurality of HRTF spheres corresponding to a sound source position described by a prescribed direction and distance relative to the listener, and an HRTF sphere used for localization of the virtual sound source switched by cross-fade processing from one HRTF sphere to another HRTF sphere to reproduce sound sources that travel in a depth-wise direction relative to the listener. Yadegari discloses an information processing device and method of generating sound of a virtual sound source by processing using a transfer function comprising a plurality of HRTF spheres corresponding to the sound source position described by a prescribed direction and distance relative to the listener (Par.[0103] As shown in figures 16-18, a virtual source #1601,1701,1801 may be localized using a plurality of HRTF spheres (#1611,1613,1615,1617)(#1711,1713,1715,1717)(#1811,1813,1815,1817); Par.[0042] Use of a “sphere” shape around a head of a listener is known; Par.[0106] HRTF measurements may include elevations, wherein an elevation in addition to direction comprise the use of a sphere; and where a location of the source relative to a listener is defined by a direction and distance), wherein a transfer function comprising a plurality of HRTF spheres corresponding to a sound source position described by a prescribed direction and distance relative to the listener, and an HRTF sphere used for localization of the virtual sound source switched by cross-fade processing from one HRTF sphere to another HRTF sphere to reproduce sound sources that travel in a depth-wise direction relative to the listener (Par.[0105-0106] As shown in the embodiment of figure 18 of Yadegari, when a virtual sound source #1801 lies at a position between known HRTF spheres #1811,1813, points on the known spheres may be used to interpolate for position #1801 of the virtual sound source; wherein a cross-fade operation is a form of linear interpolation). It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to use the HRTF processing of virtual sound sources, provided by Yadegari, to generate the virtual sound sources of Goo and Tomlin. The motivation for doing so would have been to virtually position the sound sources provided to the headphones according to control data, thereby positioning the virtual sound sources at a desired location within the sound field.

With respect to claim 17, Goo discloses a non-transitory computer readable medium comprising control logic which, upon execution by the processor, causes (Par.[0068] processor #110 executes instructions or a program for controlling the overall functions of headphones #100 and control unit #310 controls overall functions of the system): at least one speaker (fig.11/17 #400) that is installed in a space to output sound of a sound source of a static object which constitutes audio of a content (Par.[0244-0245][0247] as shown in the embodiment of figure 17, speakers #400 output multi-channel audio such as front-left, front-right and surround signals, wherein the content may be that of a background sound or static sound); and an output device (fig.11/17 #100) provided for each listener to output sound of a virtual sound source of an audio object different from the sound source (Par.[0247] output device #100 or “headphones” may output virtual object sounds, such as sounds related to virtual vehicle #17b), wherein the sound of the virtual sound source is generated corresponding to a sound source position (Par.[0173][0183][0247-0248][0252] multi-channel sound implementation device #300 provides different sound signals to both speakers #400 and open-ear headphones #100; wherein sound signals provided to speakers #400 may be background objects and sound signals provided to headphones #100 may be interactive sound objects that correspond to a source position, such as shown by the virtual sound source #17b (vehicle) in figure 17).

Goo does not disclose expressly wherein the static object has a fixed sound source position (Official Notice is taken that it is well-known in the art that background sounds may have a fixed position or “static object”. It would have been obvious before the effective filing date to a person of ordinary skill in the art to include static sounds in the background sounds of Goo. The motivation for doing so would have been to reproduce background sounds that are not moving). Goo discloses wherein an HRTF may be used to generate an immersive sound for the virtual sound source (Par.[0077]); however, it does not disclose expressly wherein the audio object is a dynamic audio object that has a moving sound source position. Tomlin discloses an output device (fig.1 #116,118) provided for a listener to output sound of a virtual sound source of a dynamic object (Par.[0017] dynamic audio objects #120 may be any object, real or virtual), wherein the dynamic object has a moving sound source position (Par.[0068] the dynamic audio objects may emit audio from an arbitrary position in space, which may change over time, based on a position of the dynamic audio object itself), wherein the sound of the virtual sound source of the dynamic object is generated by processing using a head related transfer function (HRTF) corresponding to a sound source position described by a prescribed direction and distance relative to the listener (Par.[0028] “HRTF”). It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to use the dynamic object processing of Tomlin to process the audio objects of Goo. The motivation for doing so would have been to account for moving objects, such as a movement of the virtual vehicle #17b of Goo.
The combination of Goo and Tomlin does not disclose expressly wherein a transfer function comprising a plurality of HRTF spheres corresponding to a sound source position described by a prescribed direction and distance relative to the listener, and an HRTF sphere used for localization of the virtual sound source switched by cross-fade processing from one HRTF sphere to another HRTF sphere to reproduce sound sources that travel in a depth-wise direction relative to the listener. Yadegari discloses an information processing device and method of generating sound of a virtual sound source by processing using a transfer function comprising a plurality of HRTF spheres corresponding to the sound source position described by a prescribed direction and distance relative to the listener (Par.[0103] As shown in figures 16-18, a virtual source #1601,1701,1801 may be localized using a plurality of HRTF spheres (#1611,1613,1615,1617)(#1711,1713,1715,1717)(#1811,1813,1815,1817); Par.[0042] Use of a “sphere” shape around a head of a listener is known; Par.[0106] HRTF measurements may include elevations, wherein an elevation in addition to direction comprise the use of a sphere; and where a location of the source relative to a listener is defined by a direction and distance), wherein a transfer function comprising a plurality of HRTF spheres corresponding to a sound source position described by a prescribed direction and distance relative to the listener, and an HRTF sphere used for localization of the virtual sound source switched by cross-fade processing from one HRTF sphere to another HRTF sphere to reproduce sound sources that travel in a depth-wise direction relative to the listener (Par.[0105-0106] As shown in the embodiment of figure 18 of Yadegari, when a virtual sound source #1801 lies at a position between known HRTF spheres #1811,1813, points on the known spheres may be used to interpolate for position #1801 of the virtual sound source; wherein a cross-fade operation is a form of linear interpolation). It would have been obvious before the effective filing date of the present invention to a person of ordinary skill in the art to use the HRTF processing of virtual sound sources, provided by Yadegari, to generate the virtual sound sources of Goo and Tomlin. The motivation for doing so would have been to virtually position the sound sources provided to the headphones according to control data, thereby positioning the virtual sound sources at a desired location within the sound field.

Response to Arguments

Applicant's arguments filed October 14, 2025 have been fully considered but they are not persuasive. Regarding independent claims 1, 16 and 17, it appears that the applicant is arguing that the interpolation of data points between the HRTF spheres of Yadegari is not the same as the cross-fade processing from one HRTF sphere to another HRTF sphere as provided in the present claim language. The Examiner disagrees and maintains that the interpolation between HRTF spheres #1811 and #1813 of figure 18 of Yadegari is a linear interpolation between points #1803C and #1803D, and between points #1803B and #1803E, to arrive at a transfer function for the location of sound source #1801. The linear interpolation of data points is a form of cross-fade processing between the HRTF spheres #1811 and #1813. The present claim language does not provide any details regarding the claimed “cross-fade processing” to suggest otherwise.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON R KURR whose telephone number is (571) 270-5981. The examiner can normally be reached M-F: 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vivian Chin, can be reached at (571) 272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JASON R KURR/
Primary Examiner, Art Unit 2695
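The dispute in the Response to Arguments turns on reading cross-fade processing as linear interpolation between HRTF filters measured on spheres at two different distances. A minimal sketch of that reading (the filter taps and sphere radii below are hypothetical illustration values, not taken from Yadegari or the claims):

```python
def crossfade_hrtf(h_near, h_far, r_near, r_far, r_src):
    """Cross-fade (linearly interpolate) between two HRTF impulse responses
    measured on spheres of radius r_near and r_far, for a source at
    distance r_src; each filter is a list of taps for one direction."""
    # Cross-fade weight: 0 at the near sphere, 1 at the far sphere.
    w = (r_src - r_near) / (r_far - r_near)
    w = min(max(w, 0.0), 1.0)  # clamp for sources outside the shell
    return [(1.0 - w) * a + w * b for a, b in zip(h_near, h_far)]

# Hypothetical 4-tap responses for the same direction on two spheres.
h_near = [1.0, 0.5, 0.2, 0.1]    # sphere at 1.0 m
h_far = [0.6, 0.3, 0.1, 0.05]    # sphere at 2.0 m
h_mid = crossfade_hrtf(h_near, h_far, 1.0, 2.0, 1.5)  # source at 1.5 m
```

As the source travels depth-wise from 1.0 m to 2.0 m, the weight sweeps 0 to 1, which is exactly a cross-fade between the two spheres' filters; the examiner's position is that this interpolation and the claimed cross-fade are the same operation.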

Prosecution Timeline

Dec 20, 2022: Application Filed
Oct 29, 2024: Non-Final Rejection (§103)
Jan 30, 2025: Response Filed
Feb 25, 2025: Final Rejection (§103)
Apr 25, 2025: Response after Non-Final Action
May 27, 2025: Request for Continued Examination
May 28, 2025: Response after Non-Final Action
Jun 02, 2025: Non-Final Rejection (§103)
Oct 14, 2025: Response Filed
Oct 23, 2025: Final Rejection (§103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603077: ACTIVE SOUND GENERATION DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12598439: POWER FAULT RECOVERY MECHANISM FOR AN AUDIO SYSTEM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598424: Zoned Audio Duck For In Car Conversation (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598414: System And Method For Generating An Audio Signal (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597411: SYSTEMS AND METHODS FOR VIRTUAL MICROPHONES IN ACTIVE NOISE CANCELLATION (granted Apr 07, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 75%
With Interview: 96% (+20.6%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 697 resolved cases by this examiner. Grant probability derived from career allow rate.
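The projection figures reconcile with the raw counts shown in the Examiner Intelligence panel. A quick arithmetic check, assuming (as the panel annotations suggest) that the base grant probability is the rounded career allow rate and the with-interview figure adds the +20.6-point interview lift:

```python
granted, resolved = 524, 697                  # examiner's resolved career cases
allow_pct = round(100 * granted / resolved)   # career allow rate, rounded
interview_lift = 20.6                         # percentage points, per interview panel
with_interview = round(allow_pct + interview_lift)

print(allow_pct, with_interview)  # 75 96
```

This matches the dashboard's 75% grant probability and 96% with-interview projection.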
