Prosecution Insights
Last updated: April 19, 2026
Application No. 19/113,078

AR GLASSES AND AUDIO ENHANCING METHOD AND DEVICE THEREFOR, AND READABLE STORAGE MEDIUM

Non-Final OA §103
Filed: Mar 19, 2025
Examiner: OSORIO, RICARDO
Art Unit: 2625
Tech Center: 2600 (Communications)
Assignee: Goertek Inc.
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 3m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 89% (723 granted / 813 resolved; +26.9% vs TC avg; above average)
Interview Lift: +8.2% (moderate; based on resolved cases with interview)
Typical Timeline: 2y 3m average prosecution; 21 applications currently pending
Career History: 834 total applications across all art units

Statute-Specific Performance

§101: 1.8% (-38.2% vs TC avg)
§103: 43.1% (+3.1% vs TC avg)
§102: 26.1% (-13.9% vs TC avg)
§112: 7.0% (-33.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 813 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-10 are rejected under 35 U.S.C. 103 as being unpatentable over Nagar et al. (US 2021/0281956) in view of Liu et al. (CN 112986914A).

As to claims 1 and 7, Nagar discloses an audio enhancing device and method for AR glasses worn by a user (Fig. 2, 210; [0021]) in a surrounding environment ([0033]), comprising: a sound source distribution detecting unit ([0021]) configured for detecting a distribution of sound sources in the surrounding environment from different directions using a microphone array (Fig. 2, 211). Nagar determines a gaze direction of the user ([0021], [0024], [0025]), filters the sound on the basis of the gaze direction of the user by means of the beamforming microphone array ([0021], [0024], [0025]), and outputs the filtered sound by means of a bone conduction hearing device configured for obtaining, enhancing, and outputting the enhanced audio signal ([0024]-[0026]). However, Nagar does not specifically disclose that a detected sound source position of each sound is marked on a lens of the AR glasses; locking a target sound source on the basis of an eye gazing direction of the wearer of the AR glasses; or extracting, from an audio signal received by a microphone array, an audio component related to a voiceprint feature of the target sound source for enhancement processing.
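For context on the gaze-steered beamforming the rejection attributes to Nagar, below is a minimal delay-and-sum sketch. It is illustrative only: the function name, array geometry, and integer-sample alignment are assumptions, not anything disclosed in Nagar.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def gaze_steered_beamform(frames: np.ndarray, mic_xy: np.ndarray,
                          gaze_azimuth: float, fs: int) -> np.ndarray:
    """Delay-and-sum beamforming steered toward the wearer's gaze.

    frames : (n_mics, n_samples) time-domain signals from the array
    mic_xy : (n_mics, 2) microphone positions in metres
    gaze_azimuth : gaze direction in radians
    """
    # Unit vector pointing from the array toward the gazed-at source.
    d = np.array([np.cos(gaze_azimuth), np.sin(gaze_azimuth)])
    # A far-field plane wave from direction d reaches microphones lying
    # further along d earlier, hence the negative sign.
    delays = -(mic_xy @ d) / SPEED_OF_SOUND
    shifts = np.round((delays - delays.min()) * fs).astype(int)
    n = frames.shape[1] - int(shifts.max())
    # Advance each channel by its shift so the gazed-at source adds
    # coherently while off-axis sources add incoherently.
    aligned = np.stack([ch[s:s + n] for ch, s in zip(frames, shifts)])
    return aligned.mean(axis=0)
```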
Liu discloses a target sound source and voice recognition method for an individual soldier helmet (see Abstract) comprising: collecting a sound signal of a surrounding environment (“S1: collecting the sound signal of the surrounding environment”); acquiring a relative sound source position of the sound signal and a sound source category of the sound signal on the basis of the voiceprint characteristic of the sound signal (“S2: obtaining the relative sound source position of the sound signal and the sound source type of the sound signal according to the sound signal; wherein the relative sound source position is the sound source of the sound signal relative to the geographic position of the individual helmet”; “a voice print identifying module, the voice print identifying module comprises an extracting unit and an identifying unit”); acquiring position data of the helmet body (“S3: obtaining the position data of the helmet main body”); acquiring an actual sound source position of the sound signal on the basis of the position data and the relative sound source position (“wherein the actual sound source position is the geographical position of the sound source of the sound signal relative to the geodetic coordinate system”); and displaying, on an AR eyepiece, the actual sound source position and the sound source category (“the AR eyepiece, for displaying the actual sound source position and the sound source type”). The individual soldier helmet comprises: a helmet body (“the orientation of the sound source and the orientation of the helmet body are indicated”), a microphone array module (“microphone array receives the sound signal”), and an AR eyepiece (“directly displaying result by AR eyepiece fixedly connected with the helmet body”).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to display the position of the sound source on the AR eyepiece, as taught by Liu, in the device of Nagar, thereby improving the real-time performance of the sound localization, making the position prompt of the sound source target intuitive, and raising the accuracy of the voiceprint recognition. Also, extracting, from an audio signal received by a microphone array, an audio component related to a voiceprint feature of a target sound source for enhancement processing is customary in the art.

As to claim 2, Nagar further does not specifically disclose: when the user is at a first position, obtaining a first direction line for each detected sound source according to sound signals of different intensities picked up by each microphone in the microphone array; when the user is at a second position, obtaining a second direction line for each detected sound source according to sound signals of different intensities picked up by each microphone in the microphone array; and determining a position of each sound source according to a position of an intersection point of the first direction line and the second direction line for each detected sound source.
Liu discloses, when the user is at a first position, obtaining a first direction line for each detected sound source according to sound signals of different intensities picked up by each microphone in the microphone array (“wherein the forward microphone of the present embodiment refers to the microphone in the current microphone array module and the sound source in the same side; the backward microphone refers to the microphone in the current microphone array module and the sound source in the opposite side. AI edge computing device (embedded GPU), used for according to the sound signal to obtain the sound signal of the relative sound source position and sound signal of the sound source type; wherein the relative sound source position of the embodiment is the geographical position of the sound source relative to the helmet body”). However, Nagar and Liu do not specifically disclose: when the user is at a second position, obtaining a second direction line for each detected sound source according to sound signals of different intensities picked up by each microphone in the microphone array; and determining a position of each sound source according to a position of an intersection point of the first direction line and the second direction line for each detected sound source. Examiner takes Official Notice as to these limitations. Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to determine the position of each sound source by obtaining the intersection of the first and second sound direction lines, in the device of Nagar and Liu, since locating a sound source more accurately from the intersection of two direction lines is customary in the art of sound detection, where more than one measurement is used to obtain more accurate information.

As to claim 3, Nagar further does not specifically disclose: establishing a world coordinate system with a head center of the user as a coordinate origin, and determining a coordinate of each detected sound source in the world coordinate system; establishing a camera coordinate system with a pupil of the user as a coordinate origin, and converting the coordinate of each sound source in the world coordinate system into a coordinate in the camera coordinate system according to a conversion formula obtained from a camera calibration algorithm; and marking a coordinate of each sound source in the camera coordinate system on the lenses of the AR glasses.
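The two-position technique for which the examiner takes Official Notice reduces to intersecting two bearing lines. A minimal sketch of that triangulation, with positions and angles that are purely hypothetical:

```python
import numpy as np

def locate_source(p1, theta1, p2, theta2):
    """Triangulate a sound source from two bearing lines: one taken at
    user position p1 (angle theta1), one at p2 (angle theta2)."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 == p2 + t2*d2 for the line parameters (t1, t2).
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# A source heard at 45 degrees from (0, 0) and at 135 degrees from
# (2, 0) intersects at (1, 1).
print(locate_source((0, 0), np.pi / 4, (2, 0), 3 * np.pi / 4))
```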
Liu discloses establishing a world coordinate system with a head center of user as a coordinate origin, and determining a coordinate of each detected sound source in the world coordinate system (“the actual sound source position is the geographical position of the sound source of the sound signal relative to the geodetic coordinate system”); establishing a camera coordinate system with a pupil of the user as a coordinate origin (“wherein the relative sound source position is the sound source of the sound signal relative to the geographic position of the helmet body”)(the helmet body is being used in the same manner and for the same calculation of the pupil); and converting the coordinate of each sound source in the world coordinate system into a coordinate in the camera coordinate system according to a conversion formula obtained from a camera calibration algorithm (“compared with the traditional sound source locating algorithm, the microphone sound source target locating method based on sound wave diffraction path of the solution through comprehensive utilization of microphone array information data of the helmet different side sound source”); and marking a coordinate of each sound source in the camera coordinate system on the lenses of the AR glasses (“in order to highlight the position distribution of the sound source, it is convenient for the operator to find the target, AR ocular lens is further provided with a marking module, used for according to the position distribution of the sound source, the AR eyepiece interface corresponding to the orientation for regional highlight marking. As shown in FIG. 6, the left side of the orientation image and black round for prompting the sound source of the substantially orientation, namely: the orientation corresponding to the black circle is the approximate orientation of the sound source; black triangle in the picture indicates the specific orientation of the sound source; the white triangle indicates the specific orientation of the helmet body; the orientation of the sound source and the orientation of the helmet body are indicated”). Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to mark the position of the sound source on the AR lens, as taught by Liu, in the device of Nagar, since (“it is convenient for the operator to judge the distance and orientation difference between itself and the sound source; adjusting the advancing direction in time”). As to claim 4, Nagar, further, discloses determining using an eye tracker to obtain the eye gazing direction of the user and converting the eye gazing direction into a second coordinate in the camera (Fig. 2, (212) coordinate system [0021, 0024, 0025]; when a coordinate distance between the eye gazing direction and a sound source of the distribution of sound sources in the camera coordinate system is less than a preset distance value [0024, 0025, 0049], locking onto the sound source as the target sound source [0049]. However, further, Nagar, does not specifically disclose distinctly marking the target sound source on the lenses of the AR glasses to lock onto the target sound source. 
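The claim 3 conversion is a standard rigid-body transform followed by a pinhole projection, and the claim 4 lock-on reduces to a distance threshold in the camera frame. A minimal sketch, assuming generic calibration extrinsics (R, t) and intrinsics (fx, fy, cx, cy); all function names and the threshold logic are illustrative assumptions, not the applicant's or the references' actual code:

```python
import numpy as np

def world_to_camera(p_world, R, t):
    """Rigid-body transform from the head-centred world frame into the
    pupil-centred camera frame: X_cam = R @ X_world + t, with (R, t)
    obtained from a camera calibration algorithm."""
    return R @ np.asarray(p_world, float) + np.asarray(t, float)

def mark_on_lens(p_cam, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame point to the lens pixel
    where the source marker would be drawn."""
    x, y, z = p_cam
    return fx * x / z + cx, fy * y / z + cy

def lock_target(gaze_cam, sources_cam, preset_dist):
    """Claim 4 style lock-on: pick the source whose camera-frame
    coordinate is within a preset distance of the gaze point."""
    gaze = np.asarray(gaze_cam, float)
    dists = [float(np.linalg.norm(np.asarray(s, float) - gaze))
             for s in sources_cam]
    i = int(np.argmin(dists))
    return i if dists[i] < preset_dist else None
```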
Liu discloses distinctly marking the target sound source on the lenses of the AR glasses to lock onto the target sound source (“in order to highlight the position distribution of the sound source, it is convenient for the operator to find the target, AR ocular lens is further provided with a marking module, used for according to the position distribution of the sound source, the AR eyepiece interface corresponding to the orientation for regional highlight marking”; see also the FIG. 6 marking scheme quoted above). Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to mark the position of the sound source on the AR lens, as taught by Liu, in the device of Nagar, since (“it is convenient for the operator to judge the distance and orientation difference between itself and the sound source; adjusting the advancing direction in time”).

As to claims 5 and 8, Nagar further does not specifically disclose a voiceprint characteristic extracting unit configured for extracting voiceprint characteristics for each detected sound source separately, and associating the voiceprint characteristics with corresponding sound source positions to establish a voiceprint database. Liu discloses such a unit (in Figs. 3 and 4) (“introducing bidirectional long-term memory network to realize voiceprint identification, using the complementary characteristic of the plurality of features of the sound signal, combining the time sequence characteristic of the LSTM, effectively making up the defect of the traditional all-connected neural network; using deep neural network to extract special features of the sound and fusion to form depth characteristic information; using interlayer fusion and tensor fusion to realize embedded GPU hardware acceleration of the voiceprint recognition algorithm at the network architecture layer”). Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to extract voiceprint characteristics of the detected sounds, as taught by Liu, in the device of Nagar, for (“improving the efficiency of the sound source target individual attribute recognition, reaching the requirement of voiceprint real-time identification”).

As to claim 6, Nagar discloses amplifying a gain of the extracted audio component, and/or reducing or turning off a gain of other unextracted audio components ([0044], [0049]). However, Nagar does not specifically disclose looking up the voiceprint database to obtain a voiceprint characteristic of the target sound source according to the first coordinate of the target sound source in the camera coordinate system, or extracting an audio component associated with the voiceprint characteristics of the target sound source from an audio signal currently received by the microphone array.
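The claims 5 and 8 database amounts to associating a voiceprint embedding with each detected source position and retrieving it later by position. A minimal sketch; the class name, the nearest-neighbour lookup, and the distance threshold are assumptions (Liu's actual recognizer is LSTM-based):

```python
import numpy as np

class VoiceprintDB:
    """Associate a voiceprint embedding with each detected source
    position, and retrieve the embedding later by position."""

    def __init__(self):
        self.entries = []  # list of (position, embedding) pairs

    def add(self, position, embedding):
        self.entries.append((np.asarray(position, float),
                             np.asarray(embedding, float)))

    def lookup(self, position, max_dist=0.5):
        """Return the voiceprint stored nearest to `position` if it
        lies within `max_dist` (metres); otherwise None."""
        p = np.asarray(position, float)
        best = min(self.entries,
                   key=lambda e: np.linalg.norm(e[0] - p),
                   default=None)
        if best is not None and np.linalg.norm(best[0] - p) <= max_dist:
            return best[1]
        return None
```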
Liu further discloses looking up the voiceprint database (Liu, claim 5) to obtain a voiceprint characteristic of the target sound source according to the first coordinate of the target sound source in the camera coordinate system (Liu, claim 6), and extracting an audio component associated with the voiceprint characteristics of the target sound source from an audio signal currently received by the microphone array (Liu, claims 7 and 8). Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to extract voiceprint characteristics from a voiceprint database, as taught by Liu, in the device of Nagar, for (“sound event reasoning, obtaining sound source type of the sound signal, comprising gun, cannon, bullet and other high-strength weapon”).

As to claim 9, Nagar further discloses a microphone array (Fig. 2, 211), an eye tracker (Fig. 2, 212), an in-ear headphone (Fig. 2, 203) (bone conduction hearing device), a memory (Fig. 3, 28), and a processor (Fig. 3, 16), wherein the memory stores computer programs which are loaded and executed by the processor ([0063], [0067], [0068]) to implement the audio enhancing method for AR glasses ([0024]).

As to claim 10, Nagar further discloses a non-transitory computer readable storage medium (Fig. 3, 28) storing one or more computer programs ([0063], [0067], [0068]) configured to be executed by a processor (Fig. 3, 16) to implement the audio enhancing method for AR glasses ([0024]).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICARDO OSORIO, whose telephone number is (571) 272-7676. The examiner can normally be reached M-F, 9 AM-5:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Boddie, can be reached at 571-272-0666. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RICARDO OSORIO/
Primary Examiner, Art Unit 2625
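The gain handling cited for claim 6 (amplify the extracted component; reduce or turn off the others) is, in effect, a per-source remix. A minimal sketch, assuming the audio has already been separated into per-voiceprint components; the function name and gain values are illustrative:

```python
import numpy as np

def enhance_target(components, target_id, boost=4.0, cut=0.1):
    """Remix separated per-source signals: amplify the gain of the
    component matched to the target voiceprint and reduce (or, with
    cut=0.0, turn off) the gain of the other, unextracted components.

    components : dict mapping source id -> (n_samples,) signal
    """
    mix = np.zeros_like(next(iter(components.values())))
    for source_id, signal in components.items():
        mix += (boost if source_id == target_id else cut) * signal
    return mix
```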

Prosecution Timeline

Mar 19, 2025
Application Filed
Mar 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585125: HEAD-MOUNTED DISPLAY, HEAD-MOUNTED DISPLAY LINKING SYSTEM, AND METHOD FOR SAME (2y 5m to grant; granted Mar 24, 2026)
Patent 12578797: SWITCH ASSEMBLY WITH INTEGRATED HAPTIC EXCITER (2y 5m to grant; granted Mar 17, 2026)
Patent 12579943: DISPLAY APPARATUS, DISPLAY MODULE, AND ELECTRONIC DEVICE (2y 5m to grant; granted Mar 17, 2026)
Patent 12562097: DISPLAY DEVICE AND METHOD OF DRIVING THE SAME (2y 5m to grant; granted Feb 24, 2026)
Patent 12562085: HEAD-MOUNTED DISPLAY DEVICE AND METHOD FOR CONTROLLING THE SAME (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview (+8.2%): 97%
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 813 resolved cases by this examiner. Grant probability derived from career allow rate.
