Prosecution Insights
Last updated: April 19, 2026
Application No. 18/163,436

SMART GLASS INTERFACE FOR IMPAIRED USERS OR USERS WITH DISABILITIES

Status: Non-Final Office Action (§103)
Filed: Feb 02, 2023
Examiner: HUBER, PAUL W
Art Unit: 2691
Tech Center: 2600 (Communications)
Assignee: Meta Platforms Technologies, LLC
OA Round: 3 (Non-Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 1m
Grant Probability with Interview: 95%

Examiner Intelligence

Career Allow Rate: 85% (929 granted / 1091 resolved; +23.2% vs TC avg). Grants above average.
Interview Lift: +9.5% across resolved cases with an interview (a moderate, roughly +10%, lift).
Avg Prosecution: 2y 1m (a fast prosecutor); 36 applications currently pending.
Total Applications: 1127 across all art units (career history).

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§102: 23.3% (-16.7% vs TC avg)
§103: 44.1% (+4.1% vs TC avg)
§112: 9.0% (-31.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 1091 resolved cases.

Office Action (Non-Final, §103)
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 5, 7, 9, 11, 12, 15, 17, 18, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Olwal et al. (US 2023/0122450) considered with Tomczek (US 11,727,774).

Regarding claims 1 and 11, Olwal discloses a system comprising smart glasses 100/300 (see figs. 2-4, for example) and a computer-implemented method, the system and method comprising: smart glasses comprising a camera 110, 111 (e.g., FOV camera 110 and depth camera 111; see para. 0028), an inertial measurement unit (IMU) 302, and a microphone 131-134; and a processor configured to:

- obtain image data collected by the camera 110, 111;
- obtain motion data collected by the IMU 302 (see para. 0034, regarding "the positioning data from the IMU 302 and the images from the camera(s) 310 may be used in a simultaneous localization and mapping (SLAM) process configured to track the position of the AR glasses 300 relative to a global environment. For example, the SLAM process can identify feature points in images captured by the camera(s) 310. The feature points can be combined with the IMU data to estimate a pose (i.e., position/orientation) of the AR glasses 300 relative to a global environment");
- obtain sound data collected by the microphone 131-134, wherein the sound data is indicative of a user environment (see para. 0036, regarding "the microphone array 330 may be configured to determine directions of sounds from an environment… Data from the microphone array may be used to help determine the direction and/or position of a device or person in the global environment");
- identify the user environment based on the image data, the motion data, and the sound data (see para. 0034-0036, regarding that the image data, the motion data, and the sound data can be used to determine the direction and/or position of a device or person in the global environment relative to a position/orientation of the AR glasses 300; see also para. 0040, regarding "sound recognition process 412 that can determine what type of sound has been detected"; see also para. 0044, regarding "perceptual recognition 443 can be configured to recognize visual attributes of real objects in the global environment… For example, the perceptual recognition 443 may include recognizing a face… [or] may recognize a barking sound with a captured image of a dog");
- automatically determine, based on identifying the user environment, whether a user-awareness condition is present (e.g., presence of a moving truck, barking dog, person speaking, etc.; see para. 0019, regarding "the enhanced messages may have the technical effect of conveying location and identity information corresponding to sound sources to a user without requiring effort on the part of the user (i.e., automatically)"); and
- responsive to determining that the user-awareness condition is present, provide an environmental context for the sound data (e.g., distance and/or moving direction of an object making sound, such as a barking dog, and/or identification of an object making sound, such as a person speaking), wherein the environmental context comprises a situational awareness cue with respect to changes in sound (see para. 0050, regarding "an alert (i.e., attention!) with description of the recognized sound (i.e., going backwards) and an identifier describing the sound source (i.e., truck). The text of the fifth source message 520 is sized according to the distance of the truck 519").

Olwal discloses the invention as claimed, but fails to specifically teach that the processor is further configured to automatically determine whether the user-awareness condition (e.g., barking dog) satisfies a user-importance threshold, and, responsive to determining that the user-awareness condition satisfies the user-importance threshold, provide the environmental context for the sound data as claimed (e.g., identification of a barking dog and/or its distance, location, or moving direction).

Tomczek discloses smart glasses 12 (see figs. 1-2 and col. 4, lines 1-20) including sensors, and further includes a processor configured to determine whether a user-awareness condition is present (e.g., barking dog; see col. 8, lines 41-43) based on the sensors (see fig. 4, step 400), and to further determine whether the user-awareness condition satisfies a user-importance threshold (see fig. 4, step 402, and col. 9, lines 30-56, regarding "determine whether an intrusion has been detected as coming with the interaction space, and/or whether a boundary has been encroached inward by another object… [and/or] whether the user [or object] has not actually crossed a boundary yet but is within a threshold distance of doing so"), and, in response to determining that the user-awareness condition is present (step 400) and satisfies the user-importance threshold (step 402), provide an environmental context for the sound data, wherein the environmental context comprises a situational awareness cue (see col. 10, lines 43-46, regarding "at block 410 the logic may also increase the volume level and/or frequency of presentation of the 3D audio responsive to identification of an increased risk of collision of the intrusion with the user…").
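For orientation, the combined Olwal/Tomczek logic the examiner maps onto claims 1 and 11 reduces to a simple gate: detect a user-awareness condition, test it against an importance threshold, and only then emit a situational awareness cue. The sketch below is a minimal illustration of that gate, not either reference's actual implementation; the class, threshold value, and function names are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AwarenessCondition:
    label: str            # e.g., "barking dog" or "reversing truck"
    distance_m: float     # estimated distance to the sound source
    direction_deg: float  # direction of arrival relative to the glasses

# Hypothetical Tomczek-style user-importance threshold: "too close to be ignored".
IMPORTANCE_DISTANCE_M = 5.0

def maybe_cue(condition: Optional[AwarenessCondition]) -> Optional[str]:
    # Step 1 (Olwal): a user-awareness condition must be present.
    if condition is None:
        return None
    # Step 2 (Tomczek): it must also satisfy a user-importance threshold,
    # modeled here as a simple distance test.
    if condition.distance_m > IMPORTANCE_DISTANCE_M:
        return None
    # Step 3: provide environmental context as a situational awareness cue.
    return (f"Attention! {condition.label}, "
            f"{condition.distance_m:.0f} m away at {condition.direction_deg:.0f} deg")

print(maybe_cue(AwarenessCondition("barking dog", 3.0, 45.0)))   # cue emitted
print(maybe_cue(AwarenessCondition("barking dog", 20.0, 45.0)))  # None: too far
```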
Tomczek discloses providing the environmental context, which includes the situational awareness cue, responsive to determining that the user-awareness condition is present (step 400) and satisfies the user-importance threshold (step 402), in the same field of endeavor, for the purpose of providing the environmental context only for identified objects which are deemed to also satisfy the user-importance threshold (i.e., are too close to be ignored by the user).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to modify Olwal, in view of Tomczek, such that the processor is further configured to automatically determine whether the user-awareness condition (e.g., barking dog) satisfies a user-importance threshold (e.g., is within a threshold distance of the user), and, responsive to determining that the user-awareness condition satisfies the user-importance threshold, provide the environmental context for the sound data as claimed (e.g., identification of a barking dog and/or its distance, location, or moving direction). A practitioner in the art would have been motivated to do this for the purpose of providing the environmental context only for identified objects which are deemed to also satisfy the user-importance threshold (i.e., are too close to be ignored by the user).

Regarding claim 5, the smart glasses further comprise first and second eyepieces 104, 105, and at least one of the first and second eyepieces 104, 105 comprises a display 115 (see para. 0026) configured to provide the environmental context to the user as readable text. See para. 0050, regarding "the fifth source message 520 includes an alert (i.e., attention!) with description of the recognized sound (i.e., going backwards) and an identifier describing the sound source (i.e., truck). The text of the fifth source message 520 is sized according to the distance of the truck 519".

Regarding claim 7, the microphone 131-134 comprises an array configured to capture a stereo sound, and the processor is further configured to provide an alert about a direction of a sound source based on the stereo sound. See fig. 3 and para. 0032.

Regarding claim 9, the smart glasses further comprise first and second eyepieces 104, 105. The microphone 131-134 comprises an array configured to capture a stereo sound. The processor is further configured to identify a direction of a source associated with a waveform in the stereo sound (see fig. 3 and para. 0032). At least one of the first and second eyepieces 104, 105 comprises a display 115 configured to label the source associated with the waveform (see para. 0050, regarding "the fifth source message 520 includes an alert (i.e., attention!) with description of the recognized sound (i.e., going backwards) and an identifier describing the sound source (i.e., truck). The text of the fifth source message 520 is sized according to the distance of the truck 519").

Regarding claim 12, identifying the user environment comprises determining a textual description of the image data. See fig. 6 and para. 0050, regarding "the fifth source message 520 includes an alert (i.e., attention!) with description of the recognized sound (i.e., going backwards) and an identifier describing the sound source (i.e., truck). The text of the fifth source message 520 is sized according to the distance of the truck 519".
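Olwal's para. 0050, quoted repeatedly above, sizes the on-lens alert text according to the distance of the sound source. A minimal sketch of that idea follows; the function name, point sizes, and inverse-distance scaling are assumptions for illustration, not Olwal's disclosed code.

```python
def alert_font_size_pt(distance_m: float,
                       max_pt: float = 48.0,
                       min_pt: float = 12.0,
                       ref_m: float = 1.0) -> float:
    """Scale alert text inversely with source distance, clamped to a
    readable range, so nearer hazards render larger on the eyepiece."""
    if distance_m <= 0:
        return max_pt
    return max(min_pt, min(max_pt, max_pt * ref_m / distance_m))

print(alert_font_size_pt(2.0))   # 24.0 pt for a truck 2 m away
print(alert_font_size_pt(10.0))  # 12.0 pt floor for a dog 10 m away
```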
Regarding claim 15, prior to identifying the user environment, a direction of a selected sound source is identified by synchronizing a time delay between audio signals for a waveform associated with the selected sound source, wherein the sound data comprises the audio signals, and the microphone 131-134 comprises a microphone array configured to collect the audio signals (see para. 0032, regarding "times of arrivals of the sounds at the microphones in the microphone array may help determine that the second sound source 212 is located along a second direction 222 defined by a second angle 232 with the AR glasses"), and an audio signal from the selected sound source is enhanced (see para. 0033, regarding "the AR glasses may further include a left speaker 141 and a right speaker 142 configured to transmit audio (e.g., beamformed audio) to the user").

Regarding claim 17, the audio data comprises a human voice. Identifying the user environment comprises identifying the human voice, and providing the environmental context for the sound data comprises providing a name of a person associated with the human voice (see para. 0046, regarding "the source identification 460 may compare a face or a voice print to a database 461 of familiar faces and/or familiar voiceprints to identify a person (e.g., by name)"; see also fig. 6, regarding the display of a person's name, e.g., "Alice").

Regarding claim 18, the sensor signal comprises multiple voices for multiple persons (see fig. 6, wherein the multiple voices include Alice and Charlie, for example). Providing the environmental context for the sound data comprises adding a caption with a name for each person (see fig. 6, which displays the names "Alice" and "Charlie", for example).

Regarding claim 22, providing the environmental context for the sound data comprises providing an indication of an intended mood for the user environment. For example, Olwal teaches providing "an alert (i.e., attention!) with description of the recognized sound…" (para. 0050), and Tomczek teaches providing an alert "Warning!" with a description of the recognized sound (fig. 3); these are alerts which convey an intended mood (e.g., a mood of seriousness) for the user environment as claimed.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Olwal et al. (US 2023/0122450) and Tomczek (US 11,727,774), as applied to claim 1 above, in further view of Wexler et al. (US 2022/0172736).

Olwal, as modified and applied to claim 1 above, discloses the invention as claimed, including that the smart glasses further comprise a communications module 340 configured to communicate with an external device 192 of the user (see fig. 4), but fails to specifically teach that the external device 192 of the user is a wearable device (e.g., watch) that provides environmental data to the processor 350. Wexler discloses smart glasses 110 that provide to a user an environmental context (e.g., selected speech of certain speakers) from a sensor signal indicative of a user environment from at least one sensor, wherein the smart glasses 110 include a communication module configured to communicate with a wearable device 120 of the user (e.g., watch), wherein the wearable device provides environmental data to a processor of the smart glasses 110, in the same field of endeavor, for the purpose of using the external device 120 worn by the user to determine environmental data, thereby improving the performance of the smart glasses 110 (see para. 0052, 0065).
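Claim 15's "synchronizing a time delay between audio signals" is classic time-difference-of-arrival (TDOA) localization, consistent with Olwal's para. 0032. Below is a minimal two-microphone sketch using cross-correlation; this is the generic textbook method, not Olwal's code, and the names, sampling geometry, and far-field assumption are mine.

```python
import numpy as np

SPEED_OF_SOUND_M_S = 343.0

def doa_from_mic_pair(left: np.ndarray, right: np.ndarray,
                      fs: float, mic_spacing_m: float) -> float:
    """Estimate a source's direction of arrival (degrees off broadside)
    from the time delay between two microphone signals."""
    # Cross-correlate to find the lag (in samples) that best aligns the
    # waveform as received at the two microphones.
    corr = np.correlate(left, right, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(right) - 1)
    tau_s = lag_samples / fs
    # Far-field model: tau = d * sin(theta) / c.
    sin_theta = np.clip(tau_s * SPEED_OF_SOUND_M_S / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```

Olwal's microphone array 330 has more than two elements; a real implementation would fuse several pairwise delays, but each per-pair estimate looks like this.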
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to further modify Olwal, in view of Wexler, such that the external device 192 of the user is a wearable device (e.g., watch) that provides environmental data to the processor 350. A practitioner in the art would have been motivated to do this for the purpose of using the external device worn by the user to determine environmental data, thereby improving the performance of the smart glasses 100/300.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Olwal et al. (US 2023/0122450) and Tomczek (US 11,727,774), as applied to claim 1 above, in further view of Wexler et al. (US 2022/0172736).

Olwal, as modified and applied to claim 1 above, discloses the invention as claimed, including that the smart glasses further comprise a communications module 340 configured to communicate with a mobile device 192 (see fig. 4 and para. 0037), but fails to specifically teach that the communications module 340 is configured to communicate the image data and the sound data to the mobile device 192, and that the mobile device 192 is configured to display the environmental context on a screen. Wexler discloses smart glasses 110 that provide to a user an environmental context (e.g., selected speech of certain speakers) from a sensor signal indicative of a user environment from at least one sensor (e.g., microphone and camera), wherein the smart glasses 110 include a communication module configured to communicate with a mobile device 120, wherein the communications module is configured to communicate sound data provided by the microphone and image data captured by the camera to the mobile device 120, and the mobile device 120 is configured to display the environmental context on a screen, in the same field of endeavor, so that the "user 100 may view on display 260 data (e.g., images, video clips, extracted information, feedback information, etc.) that originate from or are triggered by apparatus 110" (para. 0063). See also Wexler, para. 0052, 0065.

It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to further modify Olwal, in view of Wexler, such that the communications module 340 is configured to communicate the image data and the sound data to the mobile device 192, and that the mobile device 192 is configured to display the environmental context on a screen. A practitioner in the art would have been motivated to do this for the purpose of allowing the user to also view the environmental context data (e.g., name of the person speaking) on a display of the user's mobile device 192, thereby improving the overall system for better enjoyment by the user.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Olwal et al. (US 2023/0122450) and Tomczek (US 11,727,774), as applied to claim 1 above, in further view of Wexler et al. (US 2022/0172736).

Olwal, as modified and applied to claim 1 above, discloses the invention as claimed, including that the smart glasses further comprise a communications module 340 configured to communicate with a network server 191 (see fig. 4), but fails to specifically teach that the communication module 340 is configured to communicate the image data and the sound data to the network server 191 and to receive the environmental context from the network server 191. Wexler discloses smart glasses 110 that provide to a user an environmental context (e.g., selected speech of certain speakers) from a sensor signal indicative of a user environment from at least one sensor (e.g., microphone and camera), wherein the smart glasses 110 include a communication module configured to communicate the sensor signal to a network server, wherein the network server provides the environmental context to the smart glasses 110, in the same field of endeavor, for the purpose of using the network server to determine the environmental context from the sensor signal, thereby improving the performance of the smart glasses 110 (see para. 0052, 0064-0065).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to further modify Olwal, in view of Wexler, such that the communication module 340 is configured to communicate the image data and the sound data to the network server 191 and to receive the environmental context from the network server 191. A practitioner in the art would have been motivated to do this for the purpose of using the network server 191 to determine the environmental context from the sensor signal, thereby improving the performance of the smart glasses 100/300.
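Claims 3 and 4 describe the same offload pattern from two angles: ship raw image and sound data off the glasses (to a phone or a server) and receive the computed environmental context back. As a concrete illustration only, here is one plausible wire format for that round trip; neither Olwal nor Wexler specifies a schema, so every name and field below is hypothetical.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SensorFrame:
    timestamp_ms: int
    image_jpeg_b64: str  # camera frame, base64-encoded JPEG
    audio_pcm_b64: str   # microphone capture, base64-encoded PCM

@dataclass
class EnvironmentalContext:
    label: str           # e.g., "Alice speaking"
    direction_deg: float
    distance_m: float

def encode_frame(frame: SensorFrame) -> bytes:
    """Serialize a sensor frame for the phone (claim 3) or server (claim 4)."""
    return json.dumps(asdict(frame)).encode("utf-8")

def decode_context(payload: bytes) -> EnvironmentalContext:
    """Parse the environmental context returned for display or speech."""
    return EnvironmentalContext(**json.loads(payload))
```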
Claims 6, 10, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Olwal et al. (US 2023/0122450) and Tomczek (US 11,727,774), as applied to claim 1 and claim 11 above, in further view of Jerauld (US 9,140,554).

Olwal, as modified and applied to claim 1 and claim 11 above, discloses the invention as claimed, including that the smart glasses further comprise a speaker 141, 142, and that the environmental context is provided to the user as a visual description (e.g., textual message), but fails to specifically teach either: that the speaker 141, 142 is configured to provide the environmental context to the user as an audio description; that the processor is further configured to obtain a textual description of the image data and to cause the speaker 141, 142 to read the textual description of the image data to the user; or providing the environmental context for the sound data, via the speaker 141, 142, as a spoken description of the image data collected by the camera 110, 111.

Jerauld discloses smart glasses 200 including a speaker 216 that provides to a user an environmental context (e.g., identification of an object, name of a person who is speaking) from a sensor signal indicative of a user environment from at least one sensor (e.g., microphone and camera), wherein the speaker 216 is configured to provide the environmental context to the user as an audio description (see col. 6, lines 46-49, regarding "vocalizing 'Ottoman' in a soft voice via speaker 216"), wherein a textual description of the image data is obtained and the speaker 216 is caused to read the textual description of the image data to the user (e.g., identifying an imaged object as an ottoman and reading the textual description of the imaged object, i.e., 'Ottoman', to the user), and wherein a spoken description of the image data from the camera (e.g., 'Ottoman') is provided via the speaker 216. See also Jerauld, col. 7, lines 51-58, regarding "the navigation module 16 may use one or more of the depth image data 38 and visible image data 46 to recognize the face of the person 438 sitting on the couch 434. The navigation module may then associate the person's face with an identity of the person, using for example a facial recognition database stored in a remote server. The navigation module may then [audibly] inform the user 14 of the identity of the person 438".

Jerauld discloses such an audible communication method to the user, in the same field of endeavor, for the purpose of audibly communicating the environmental context to the user so the user can be made audibly aware of an environmental condition detected by the smart glasses 200. It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to further modify Olwal, in view of Jerauld, such that either: the speaker 141, 142 is configured to provide the environmental context to the user as an audio description; the processor is further configured to obtain a textual description of the image data and to cause the speaker 141, 142 to read the textual description of the image data to the user; or the environmental context for the sound data is provided, via the speaker 141, 142, as a spoken description of the image data collected by the camera 110, 111. A practitioner in the art would have been motivated to do this for the purpose of audibly communicating the environmental context to the user so the user can also be made audibly aware of an environmental condition detected by the smart glasses 200.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Olwal et al. (US 2023/0122450) and Tomczek (US 11,727,774), as applied to claim 1 above, in further view of Grieves et al. (US 2022/0366874).

Olwal, as modified and applied to claim 1 above, discloses the invention as claimed, including that the microphone 131-134 comprises an array configured to capture a stereo sound, but fails to specifically teach that the processor is further configured to convert the stereo sound to mono-audio output from the speaker for a user that has diminished hearing in one ear. Grieves discloses an electronic device which allows a user to switch from stereo sound to mono-audio sound, in the same field of endeavor, when the user has diminished hearing in one ear (see para. 0088). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to further modify Olwal, in view of Grieves, such that the processor converts the stereo sound to mono-audio output from the speaker for a user that has diminished hearing in one ear. A practitioner in the art would have been motivated to do this for the purpose of allowing a user to switch to mono-audio sound when the user has diminished hearing in one ear.
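The claim 8 stereo-to-mono conversion is a one-line downmix in practice. A minimal sketch, assuming a NumPy buffer of interleaved samples (the function name and buffer layout are mine, not Grieves' disclosure):

```python
import numpy as np

def downmix_to_mono(stereo: np.ndarray) -> np.ndarray:
    """Collapse a (num_samples, 2) stereo buffer to mono and play the same
    signal in both channels, so content panned to one side is not lost on
    a user with diminished hearing in one ear."""
    mono = stereo.mean(axis=1, keepdims=True)  # average left and right
    return np.repeat(mono, 2, axis=1)          # duplicate into both channels
```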
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Olwal et al. (US 2023/0122450) and Tomczek (US 11,727,774), as applied to claim 11 above, in further view of Wexler et al. (US 2022/0172736).

Olwal, as modified and applied to claim 11 above, discloses the invention as claimed, but fails to specifically teach that the sound data comprises a background sound, and that providing the environmental context for the sound data comprises removing the background sound from the sound data and, after removing the background sound, providing the sound data to the user via a speaker. Wexler discloses a computer-implemented method (see para. 0052, 0059), comprising: collecting, from a headset 110 (e.g., smart glasses) or wearable device 120 with a user, a sensor signal indicative of a user environment; identifying the user environment based on a signal attribute (e.g., "one or more detected audio characteristics of sounds associated with a voice of individual 2010"; see para. 0162); and communicating, to the user, a context from the user environment, in the headset 110 (see fig. 28, for example). See also Wexler, para. 0161, regarding "processor 210 may use various techniques to recognize the voice of individual 2010, … The recognized voice pattern and the detected facial features may be used, either alone or in combination, to determine that individual 2010 is recognized by apparatus 110." See also Wexler, para. 0216-0217, regarding "when in a gathering such as a party, the user may be interested in hearing a person with whom he is currently speaking. In this case, it may be helpful to reduce and or eliminate other audio such as speech by other speakers, music, and/or background noise."

Wexler discloses communicating the context for the user environment by removing the background sound from another sound provided to the user via a speaker, in the same field of endeavor, for the purpose of audibly providing environmental context to the user (e.g., a particular person who is speaking whom the user wishes to hear) that is free of background noise. It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to further modify Olwal, in view of Wexler, such that the sound data comprises a background sound, and providing the environmental context for the sound data comprises removing the background sound from the sound data and, after removing the background sound, providing the sound data to the user via a speaker. A practitioner in the art would have been motivated to do this for the purpose of audibly providing environmental context to the user (e.g., a particular person who is speaking whom the user wishes to hear) that is free of background noise.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Olwal et al. (US 2023/0122450) and Tomczek (US 11,727,774), as applied to claim 11 above, in further view of Wexler et al. (US 2022/0172736).

Olwal, as modified and applied to claim 11 above, discloses the invention as claimed, but fails to specifically teach that the audio data comprises a broadband spectral sound, that identifying the user environment is further based on a spectral profile of the broadband spectral sound, and that providing the environmental context for the sound data comprises converting the spectral profile into a narrow-band spectral sound that can be heard by the user. Wexler discloses a computer-implemented method (see para. 0052, 0059), comprising: collecting, from a headset 110 (e.g., smart glasses) or wearable device 120 with a user, a sensor signal indicative of a user environment; identifying the user environment based on a signal attribute (e.g., "one or more detected audio characteristics of sounds associated with a voice of individual 2010"; see para. 0162); and communicating, to the user, a context from the user environment, in the headset 110 (see fig. 28, for example). See also Wexler, para. 0202, regarding "user 100 may have lesser sensitivity to tones in a certain range and conditioning of the audio signals may adjust the pitch of sound 2421. For example, user 100 may experience hearing loss in frequencies above 10 kHz and processor 210 may remap higher frequencies (e.g., at 15 kHz) to 10 kHz."

Wexler discloses converting the spectral profile into a narrow-band spectral sound that can be heard by the user, in the same field of endeavor, for the purpose of communicating to the user the context for the user environment with a narrow-band spectral sound that can be heard by a user with hearing loss. It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to further modify Olwal, in view of Wexler, such that the audio data comprises a broadband spectral sound, identifying the user environment is further based on a spectral profile of the broadband spectral sound, and providing the environmental context for the sound data comprises converting the spectral profile into a narrow-band spectral sound that can be heard by the user. A practitioner in the art would have been motivated to do this for the purpose of audibly providing environmental context to the user (e.g., a particular person who is speaking whom the user wishes to hear), converted into a narrow-band spectral sound that can be heard by a user with hearing loss.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Olwal et al. (US 2023/0122450) and Tomczek (US 11,727,774), as applied to claim 1 above, in further view of Norris et al. (US 2024/0146847).

Olwal, as modified and applied to claim 1 above, discloses the invention as claimed, but fails to specifically teach that providing the environmental context for the sound data comprises providing an indication of an ambient volume level of the user environment. Norris discloses smart glasses worn by a user (see para. 0037), the smart glasses including a processor and a sensor for producing sound data, the processor configured to identify a user environment based on the sound data and to provide an environmental context for the sound data which includes providing an indication of an ambient volume level of the user environment, in the same field of endeavor, for the purpose of alerting the user to the presence of a loud ambient sound (see para. 0079, regarding "a bright red visual alert can correspond to a loud PE sound"). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to further modify Olwal, in view of Norris, such that providing the environmental context for the sound data comprises providing an indication of an ambient volume level of the user environment. For example, see Olwal, fig. 6, wherein the message 521 "Bark!" can include a bright red visual alert to indicate to the user that the barking sound of the dog is relatively loud, as taught by Norris. A practitioner in the art would have been motivated to do this for the purpose of alerting the user to the presence of an ambient sound which is very loud and needs the immediate attention of the user.

Applicant's arguments with respect to the claims have been considered but are moot because of the new ground of rejection necessitated by amendment.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL W HUBER, whose telephone number is (571) 272-7588. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen, can be reached at 571-272-7503.
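The claim 16 mapping hinges on Wexler's para. 0202 idea of remapping frequencies above a hearing-loss cutoff into the audible band. One generic way to do that is in the short-time Fourier domain, moving energy from high bins to lower ones. The sketch below is a textbook illustration under my own assumptions (cutoff, compression ratio, STFT parameters), not Wexler's disclosed algorithm; with ratio=0.0 it collapses everything above the cutoff onto it, matching the quoted 15 kHz to 10 kHz example.

```python
import numpy as np
from scipy.signal import stft, istft

def compress_high_band(x: np.ndarray, fs: float,
                       cutoff_hz: float = 10_000.0,
                       ratio: float = 0.5) -> np.ndarray:
    """Remap spectral energy above `cutoff_hz` into the band the user can
    still hear. ratio=0.5 halves distances above the cutoff (15 kHz lands
    near 12.5 kHz); ratio=0.0 maps all high content to the cutoff itself."""
    f, _, Z = stft(x, fs=fs, nperseg=1024)
    out = np.zeros_like(Z)
    for i, freq in enumerate(f):
        if freq <= cutoff_hz:
            out[i] += Z[i]                       # audible band passes through
        else:
            target = cutoff_hz + (freq - cutoff_hz) * ratio
            j = int(np.argmin(np.abs(f - target)))
            out[j] += Z[i]                       # fold high bin downward
    _, y = istft(out, fs=fs, nperseg=1024)
    return y
```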
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to the USPTO patent electronic filing system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via a variety of formats. See MPEP § 713.01. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/InterviewPractice.

/PAUL W HUBER/
Primary Examiner, Art Unit 2691
February 2, 2026

Prosecution Timeline

Feb 02, 2023: Application Filed
Apr 04, 2025: Non-Final Rejection (§103)
Jul 09, 2025: Response Filed
Jul 09, 2025: Examiner Interview Summary
Jul 09, 2025: Applicant Interview (Telephonic)
Oct 17, 2025: Final Rejection (§103)
Jan 21, 2026: Request for Continued Examination
Jan 28, 2026: Response after Non-Final Action
Feb 02, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12604150: Method and System for Spatial Audio Processing Using Multiple Orders of Ambisonics (2y 5m to grant; granted Apr 14, 2026)
Patent 12593189: Method of Generating Vibration Feedback Signal, Electronic Device and Storage Medium (2y 5m to grant; granted Mar 31, 2026)
Patent 12593159: Magnetic Earphones Holder (2y 5m to grant; granted Mar 31, 2026)
Patent 12587803: Information Processing Apparatus and Information Processing Method (2y 5m to grant; granted Mar 24, 2026)
Patent 12587804: Location-Aware Neural Audio Processing in Content Generation Systems and Applications (2y 5m to grant; granted Mar 24, 2026)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 85%
With Interview: 95% (+9.5%)
Median Time to Grant: 2y 1m
PTA Risk: High

Based on 1091 resolved cases by this examiner. Grant probability is derived from the career allow rate.
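The note above says the grant probability is derived from the career allow rate. A plausible reconstruction of the arithmetic, using only the figures shown on this page (this is my assumption about the derivation, not the tool's disclosed formula):

```python
# Figures from the Examiner Intelligence section above.
granted, resolved = 929, 1091
interview_lift = 0.095                       # +9.5 percentage points

allow_rate = granted / resolved              # career allow rate
with_interview = allow_rate + interview_lift # assumed additive lift

print(f"{allow_rate:.1%}")      # 85.2% -> displayed as 85%
print(f"{with_interview:.1%}")  # 94.7% -> displayed as 95%
```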
