Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Acknowledgement
Acknowledgement is made of applicant’s amendment filed on 10/31/2025. Applicant’s submission has been entered and made of record.
Status of the Claims
Claims 1-29 are pending.
Response to Applicant’s Arguments
In response to applicant’s arguments, reproduced below, the examiner responds as follows.
Applicant argues: “For another example, as described by Usher at paragraph [0079], "whenever 'John' is listening to music when out for a run, his friends can shout across the road 'hey john', which could automatically pause music playback, pass through ambient sound and allow John to speak with his name-calling friend”.
Applicant further argues: “Usher only describes using the voice activity detector and other circuitry to analyze sounds from the user's ambient environment for the stored keywords and taking corresponding actions. Nowhere does Usher disclose or suggest "voice activity detection (VAD) circuitry configured to analyze one or more broadcast streams comprising audio data, to identify first segments of the one or more broadcast streams in which the audio data of the one or more broadcast streams includes speech data, and to identify second segments of the one or more broadcast streams in which the audio data does not include speech data" (emphasis added). Similarly, Usher does not disclose or suggest using derivation circuitry, keyword detection circuitry, or decision circuitry to analyze or act upon "broadcast streams comprising audio data," as recited by amended claim 1”.
Applicant additionally argues: “While Usher at paragraphs [0086] and [0104] disclose that the hearing device can receive broadcast streams, nowhere does Usher disclose or suggest that the hearing device analyzes these broadcast streams for speech data or takes corresponding actions depending on whether segments of these broadcast streams includes one or more words of a set of stored keywords. Usher is only concerned with analyzing ambient sounds (e.g., to facilitate the user being able to hear and understand ambient speech; see, Usher at paragraph [0079]). While the Office Action at pages 3-4 cites paragraphs [0024], [0059], [0060], [0077], [0079], and [0082] of Usher as allegedly disclosing analysis of "one or more broadcast streams," these paragraphs only disclose analysis of ambient sounds around the user (e.g., signals from ambient sound microphone (ASM) inputs). Furthermore, Applicant submits that Usher does not provide a reasonable rationale for applying such analysis to non-ambient sounds (e.g., broadcast streams comprising audio data)”.
According to the specification US 2024/0185881 A1 at ¶16: “Certain implementations described herein provide a device (e.g., hearing device) configured to receive wireless broadcasts (e.g., Bluetooth 5.2 broadcasts; location-based Bluetooth broadcasts) that stream many audio announcements, at least some of which are of interest to the user of the device”.
Further, Merriam-Webster defines “broadcast” as (1) to scatter or sow over a broad area, (2) to make widely known, or (3) to send out or transmit by means of radio or television or by streaming over the internet.1
Therefore, in Usher, when the friends of a HearBud wearer named “John” shout “hey John” across the road (¶79), those shouts constitute a stream of broadcasts comprising many audio announcements (“hey John”) that were at least (1) scattered over a broad area and (2) made widely known across the road.
By teaching a voice activity detection circuit (¶58, voice activity detector to detect when someone is speaking with the HearBud wearer; i.e., processing multiple streams of “hey John” from friends across the road), derivation circuitry (¶86, performing speech-to-text translation for the target keyword phrase “hey John” per ¶79), keyword detection circuitry (¶54, HearWare software with keyword detection for detecting the target keyword phrase “hey John”), and decision circuitry (¶79, when detecting “hey john”, automatically pausing music playback and passing through the ambient sound to allow John to speak with his name-calling friends), all processing the friends’ shouts of “hey John” across the road, Usher teaches every limitation of the claim.
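For clarity of the record, the mapped chain of VAD, derivation, keyword detection, and decision circuitry can be summarized by the following sketch (Python). The sketch is the examiner’s hypothetical paraphrase and appears nowhere in Usher; all identifiers (e.g., detect_voice_activity, STORED_KEYWORDS) and the stub logic are illustrative assumptions.

```python
# Hypothetical sketch of the claimed chain mapped onto Usher: VAD ->
# derivation (speech-to-text) -> keyword detection -> decision. All
# identifiers and the stub logic below are illustrative assumptions.
STORED_KEYWORDS = {"john"}                        # e.g., from "hey john" (¶79)

def detect_voice_activity(segment):
    """VAD circuitry stand-in: True for first (speech) segments."""
    return segment.get("has_speech", False)

def speech_to_text(segment):
    """Derivation circuitry stand-in: derive words from speech data (¶86)."""
    return segment.get("transcript", "").lower().split()

def detect_keywords(words):
    """Keyword detection circuitry stand-in: flag stored keywords (¶54, ¶73)."""
    return [w for w in words if w in STORED_KEYWORDS]

def decide(keywords):
    """Decision circuitry stand-in: select among communication options (¶79)."""
    if keywords:
        return "pause_playback_and_pass_through_ambient"
    return "continue_playback"

def process_stream(stream):
    for segment in stream:
        if not detect_voice_activity(segment):    # second segments: skipped
            continue
        yield decide(detect_keywords(speech_to_text(segment)))

stream = [{"has_speech": False},
          {"has_speech": True, "transcript": "hey John"}]
print(list(process_stream(stream)))
# -> ['pause_playback_and_pass_through_ambient']
```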
In response to applicant’s argument that “Applicant submits that amended claim 14 includes features which are not disclosed or rendered obvious by Usher. For example, Usher does not disclose or suggest "dividing the one or more electromagnetic wireless broadcast streams into a plurality of segments comprising speech-including segments and speech-excluding segments" or "evaluating the audio data of each speech-including segment of the one or more electromagnetic wireless broadcast streams for inclusion of at least one keyword," as recited by amended claim 14”, the examiner responds as follows.
Usher teaches that earphones 1820 communicate wirelessly with an iOS / Android device 1840 via radio frequency or Bluetooth, and that the earphone comprises one or more ambient sound microphones (¶52):
[Image: media_image1.png — Usher Fig. 18, reproduced in greyscale]
Specifically, Fig. 24 shows a particular implementation of system 2400 comprising a first user device 2402 (a mobile device / smartphone, ¶87) that interacts with various applications executing within system 2400, such as speech-to-text translation (¶86), and an earphone device 2415, with wireless transmissions facilitated between the earphone device 2415 and the first user device 2402 (¶92).
[Image: media_image2.png — Usher Fig. 24, reproduced in greyscale]
In other words, when the earphone device’s ambient sound microphone collects a stream of broadcasts from friends shouting “hey John” across the road (¶79), the earphone device wirelessly transmits that broadcast stream (“hey John”) to the user device (iOS / Android) via RF / electromagnetic waves for processing: voice activity detection to divide speech-including sections from speech-excluding sections; speech-to-text derivation plus keyword detection to evaluate each speech-including section for a keyword; and a decision to automatically pause music playback and pass through the ambient sound so that John can speak with his name-calling friends.
Therefore, Usher teaches every limitation of claims 14 and 24.
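To make the dividing step concrete, it can be illustrated by the following sketch (Python). This is the examiner’s hypothetical illustration, not code from Usher: a short-time energy threshold stands in for the VAD test, whereas Usher’s actual determination (¶77) uses ECM/ASM cross correlation; all names and values are assumptions.

```python
# Hypothetical sketch of dividing a received audio stream into
# speech-including and speech-excluding segments. The energy threshold
# is an illustrative stand-in for the VAD decision.
import numpy as np

def divide_stream(samples, rate=16000, frame_ms=20, threshold=1e-3):
    """Split a received audio stream into fixed frames and label each frame
    as speech-including or speech-excluding by short-time energy."""
    frame_len = int(rate * frame_ms / 1000)          # 20 ms frames
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.mean(frames ** 2, axis=1)            # per-frame energy
    speech_mask = energy > threshold                 # stand-in VAD decision
    return frames[speech_mask], frames[~speech_mask]

rng = np.random.default_rng(0)
quiet = rng.normal(0.0, 0.001, 16000)                # speech-excluding material
loud = rng.normal(0.0, 0.1, 16000)                   # speech-like energy
inc, exc = divide_stream(np.concatenate([quiet, loud]))
print(len(inc), "speech-including frames;", len(exc), "speech-excluding frames")
```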
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
(a) NOVELTY; PRIOR ART.—A person shall be entitled to a patent unless—
(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention; or
(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
(b) EXCEPTIONS.—
(1) DISCLOSURES MADE 1 YEAR OR LESS BEFORE THE EFFECTIVE FILING DATE OF THE CLAIMED INVENTION.—A disclosure made 1 year or less before the effective filing date of a claimed invention shall not be prior art to the claimed invention under subsection (a)(1) if—
(A) the disclosure was made by the inventor or joint inventor or by another who obtained the subject matter disclosed directly or indirectly from the inventor or a joint inventor; or
(B) the subject matter disclosed had, before such disclosure, been publicly disclosed by the inventor or a joint inventor or another who obtained the subject matter disclosed directly or indirectly from the inventor or a joint inventor.
(2) DISCLOSURES APPEARING IN APPLICATIONS AND PATENTS.—A disclosure shall not be prior art to a claimed invention under subsection (a)(2) if—
(A) the subject matter disclosed was obtained directly or indirectly from the inventor or a joint inventor;
(B) the subject matter disclosed had, before such subject matter was effectively filed under subsection (a)(2), been publicly disclosed by the inventor or a joint inventor or another who obtained the subject matter disclosed directly or indirectly from the inventor or a joint inventor; or
(C) the subject matter disclosed and the claimed invention, not later than the effective filing date of the claimed invention, were owned by the same person or subject to an obligation of assignment to the same person.
Claims 1-9, 13-22, and 24-29 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Usher et al. (US 2019/0278556 A1).
Regarding Claim 1, Usher discloses an apparatus (Fig. 18 and ¶52, iOS, Android devices 1840; e.g., ¶¶87-88, and Fig. 24, user devices: mobile device smart phones) comprising:
voice activity detection (VAD) circuitry (¶52, a DSP chip for audio signal management; ¶58, an application of a Directional Enhancement algorithm is a front end to a voice activity detector to detect when someone is speaking; ¶59, running a Directional Enhancement algorithm on a DSP; see example, ¶87, first user device 2402 with processor 2404 executing instructions stored in memory 2403 to implement the DSP chip) configured to analyze one or more broadcast streams comprising audio data (¶59, the algorithm analyzes signal of ambient sound microphone (ASM) inputs; per ¶24, ambient sound microphones configured to detect sound around the listener; e.g., ¶60, car horn or someone shouting at the wearer), to identify first segments of the one or more broadcast streams in which the audio data of the one or more broadcast streams includes speech data (¶77 and ¶79, detect voice activity such as “hey john”), and to identify second segments of the one or more broadcast streams in which the audio data does not include speech data (¶82, perform horn detection to analyze ASM signal to detect a car horn);
derivation circuitry configured (¶86 and ¶104, implementing speech-to-text translation applications on user devices - mobile device smartphone; ¶87, first user device 2402 with processor 2404 executing instructions stored in memory 2403 to implement the speech to text translation applications) to receive the first segments of the one or more broadcast streams and, for each first segment of the one or more broadcast streams, to derive one or more words from the speech data of the first segment (per ¶73, toggle keyword detect button 2340 and monitor vocal audio for stored keywords by implementing speech to text translation applications);
keyword detection circuitry (¶54, Sound recognition system: keyword detection; ¶87, first user device 2402 with processor 2404 executing instructions stored in memory 2403 to implement the sound recognition system for keyword detection) configured to, for each first segment of the one or more broadcast streams, receive the one or more words and to generate keyword information indicative of whether at least one word of the one or more words is among a set of stored keywords (¶¶78-9, optimized algorithm for detecting when a wearer’s friends shout across the road targeted keyword “hey john”; in view of ¶73, monitor vocal audio for stored keywords; e.g., “John” from “hey john” is a stored keyword); and
decision circuitry (¶87, first user device 2402 with processor 2404 executing instructions stored in memory 2403) configured to receive the first segments of the one or more broadcast streams, the one or more words of each of the first segments of the one or more broadcast streams, and the keyword information for each of the first segments of the one or more broadcast streams and, for each first segment of the one or more broadcast streams, to select, based at least in part on the keyword information, among a plurality of options regarding communication of information indicative of the first segment to a recipient (¶79, when detecting target keyword phrase “hey john”, automatically pause music playback and pass through the ambient sound to allow John to speak with his name-calling friend).
Regarding Claim 2, Usher discloses wherein the VAD circuitry, the derivation circuitry, the keyword detection circuitry, and the decision circuitry are components of one or more microprocessors (¶87, first user device 2402 with processor 2404 executing instructions stored in memory 2403).
Regarding Claim 3, Usher discloses the apparatus of claim 2, further comprising an external device configured to be worn, held, and/or carried by the recipient (¶88, in addition to using first user device 2402, user 2401 also has access to second user device 2406), the external device comprising at least one microprocessor of the one or more microprocessors (¶88, second user device 2406 includes processor 2408 executing instructions from memory 2407).
Regarding Claim 4, Usher discloses the apparatus of claim 2, further comprising a sensory prosthesis configured to be worn by the recipient or implanted on and/or within the recipient's body (¶91, earphone device 2415 inserted into user’s ear), the sensory prosthesis comprising at least one microprocessor of the one or more microprocessors (¶92, earphone device 2415 includes processors that execute instructions from memory).
Regarding Claim 5, Usher discloses wherein the sensory prosthesis and the external device are in wireless communication with one another (¶92, earphone device 2415 comprises a transceiver to facilitate wireless connections and transmissions between earphone device 2415 and any device in system 2400; see Fig. 24, system 2400 comprises first user device 2402 and second user device 2406).
Regarding Claim 6, Usher discloses wherein the VAD circuitry is further configured to parse the first segments from the second segments (¶77, determine voice activity by an analysis of un-normalized cross correlation between ear canal microphone (ECM) and ambient sound microphone (ASM) signals), to exclude the second segments from further processing (¶77, robust voice activity detection with significant low frequency energy on ear canal microphone post acoustic echo cancellation when the user is speaking to avoid false positives; see also ¶82, when a target sound is detected, signalGain is set to unity, and is zero otherwise), and to transmit the first segments to the derivation circuitry and the decision circuitry (¶78, keyword feature uses optimized sensory algorithm to detect when user utters a keyword phrase from ambient sound-delayed buffer per ¶77).
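For illustration, the ¶77 cross-correlation test can be sketched as follows (Python). The sketch is hypothetical: Usher discloses using the un-normalized cross correlation between ECM and ASM signals, but the threshold value and the synthetic signals below are the examiner’s assumptions.

```python
# Hypothetical sketch of voice activity detection from the un-normalized
# ECM/ASM cross correlation (cf. Usher ¶77). Threshold is an assumption.
import numpy as np

def vad_by_cross_correlation(ecm, asm, threshold=0.5):
    """Speaking is inferred when the un-normalized ECM/ASM cross
    correlation shows a sufficiently large peak."""
    xcorr = np.correlate(ecm, asm, mode="full")
    return float(np.max(np.abs(xcorr))) > threshold

rng = np.random.default_rng(1)
speech = rng.normal(0.0, 0.2, 512)
# Wearer speaking: ECM and ASM observe correlated versions of the voice.
print(vad_by_cross_correlation(speech + rng.normal(0.0, 0.01, 512), speech))
# Uncorrelated low-level noise on the two microphones: no voice activity.
print(vad_by_cross_correlation(rng.normal(0.0, 0.01, 512),
                               rng.normal(0.0, 0.01, 512)))
```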
Regarding Claim 7, Usher discloses wherein the derivation circuitry is further configured to transmit the one or more words to the keyword detection circuitry (¶73, monitor vocal audio for stored keywords by implementing speech to text translation applications and detect keyword phrase like “hey john” per ¶79).
Regarding Claim 8, Usher discloses wherein the keyword detection circuitry is further configured to retrieve the set of stored keywords from memory circuitry (¶87, first user device 2402 with processor 2404 executing instructions stored in memory 2403 to implement stored keywords per ¶124: a database of stored spectrums associated with a keyword and phrase).
Regarding Claim 9, Usher discloses wherein the set of stored keywords comprises, for each stored keyword, information indicative of an importance of the stored keyword (¶125, matching identified keyword / keyphrase to an action listed on a database where a series of actions can be associated with the keyword / keyphrase).
Regarding Claim 13, Usher discloses wherein the plurality of options regarding communication of information indicative of the first segment to the recipient comprises at least one of:
at least one text message indicative of the one or more words of the first segment;
at least one visual, audio, and/or tactile signal indicative of whether the one or more words of the first segment comprises a stored keyword, indicative of an identification of the stored keyword, and/or indicative of an importance of the stored keyword (¶73, when the keyword is detected, enact an action associated with the keyword; e.g., activate Siri when the phrase “hello blue genie” is vocalized);
at least one signal indicative of the audio data of the first segment and communicated to the recipient (¶79, when ambient sound includes target keyword phrase “hey john”, pass through the ambient sound and allow John to speak with name-calling friend); and
at least one signal indicative of the audio data of the first segment and transmitted to memory circuitry to be stored and subsequently retrieved and communicated to the recipient.
Regarding Claims 14 and 24, Usher discloses a non-transitory computer readable storage medium having stored thereon a computer program (¶87, first user device 2402 with processor 2404 executing instructions stored in memory 2403 to implement the speech to text translation applications) that instructs a computer system (Fig. 18 and ¶52, iOS, Android devices 1840; e.g., ¶¶87-88, and Fig. 24, user devices: mobile device smart phones) to segment real-time audio information into distinct sections of information by at least:
receiving one or more electromagnetic wireless broadcast streams comprising audio information (¶92, earphone device 2415 comprises a transceiver to facilitate wireless connections and transmissions between earphone device 2415 and any device in system 2400; ¶42 and ¶52, earphone with ambient microphones monitoring sound from ambient environment; e.g., friends shouting across the road “hey john” as ambient sound / multiple streams of broadcasts, transmitted to the user device via electromagnetic / RF waves);
dividing / segmenting the one or more electromagnetic wireless broadcast streams into a plurality of sections / segments (¶77, determine voice activity by an analysis of un-normalized cross correlation between ear canal microphone (ECM) and ambient sound microphone (ASM) signals) comprising speech-including sections / segments (¶77, robust voice activity detection with significant low frequency energy on ear canal microphone post acoustic echo cancellation when the user is speaking to avoid false positives; see also ¶82, when a target sound is detected, signalGain is set to unity) and speech-excluding sections / segments (¶82, when a target sound is detected, signalGain is set to unity, and is zero otherwise);
evaluating the audio information of each speech-including section / segment of the one or more electromagnetic wireless broadcast streams for inclusion of at least one keyword (¶¶78-9, optimized algorithm for detecting when a wearer’s friends shout across the road a targeted keyword “hey john” (i.e., multiple streams of broadcasts); in view of ¶73, monitor vocal audio for stored keywords; e.g., “John” from “hey john” is a stored keyword); and
based on said evaluating, communicating information regarding the speech-including section to a user (¶79, when detecting target keyword phrase “hey john”, automatically pause music playback and pass through the ambient sound to allow John to speak with his name-calling friend).
Regarding Claim 15, Usher discloses wherein said receiving is performed by a personal electronic device worn, held, and/or carried by the user or implanted on or within the user's body (¶91, earphone device 2415 inserted into user’s ear).
Regarding Claim 16, Usher discloses wherein the one or more electromagnetic wireless broadcast streams comprises at least one Bluetooth broadcast stream (¶52 and Fig. 18, earphones connected to BB 1810, which communicates with Android / iOS device via Bluetooth).
Regarding Claims 17 and 25, Usher discloses wherein said dividing comprises:
detecting at least one characteristic for each section / segment of the plurality of sections / segments (¶77, determine voice activity by an analysis of un-normalized cross correlation between ear canal microphone (ECM) and ambient sound microphone (ASM) signals);
determining, for each segment / section of the plurality of segments / sections, whether the at least one characteristic is indicative of either the section / segment being a speech-including section / segment or a speech-excluding section / segment (¶77, robust voice activity detection with significant low frequency energy on ear canal microphone post acoustic echo cancellation when the user is speaking to avoid false positives; see also ¶82, when a target sound is detected, signalGain is set to unity, and is zero otherwise); and
appending information to at least some of the sections / segments, the information indicative of whether the section / segment is a speech-including section / segment or a speech-excluding section / segment (¶78, keyword feature uses optimized sensory algorithm to detect when user utters a keyword phrase from ambient sound-delayed buffer per ¶77; e.g., ¶125, matching the detected keyword or keyphrase to an action listed on a database).
Regarding Claims 18 and 25, Usher discloses wherein said dividing further comprises excluding the speech-excluding sections / segments from further processing (¶82, when a target sound is detected, signalGain is set to unity, and is zero otherwise).
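For illustration, the ¶82 signalGain gating can be sketched as follows (Python); the detector stub and the list representation of sections are the examiner’s assumptions, not Usher’s implementation.

```python
# Hypothetical sketch of ¶82 gating: unity gain when a target sound is
# detected in a section, zero gain otherwise, thereby excluding
# speech-excluding sections from further processing.
def signal_gain(target_detected):
    """Per ¶82: unity gain when a target sound is detected, zero otherwise."""
    return 1.0 if target_detected else 0.0

def gate_sections(sections, detector):
    """Apply signalGain per section; zero-gain sections are excluded."""
    kept = []
    for samples, label in sections:
        gain = signal_gain(detector(label))
        if gain > 0.0:
            kept.append([gain * s for s in samples])
    return kept

sections = [([0.1, 0.2], "speech"), ([0.0, 0.01], "silence")]
print(gate_sections(sections, detector=lambda label: label == "speech"))
# -> [[0.1, 0.2]] (the speech-excluding section is dropped)
```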
Regarding Claims 19 and 26, Usher discloses wherein said evaluating the audio information comprises the following (illustrated by the sketch after this listing):
extracting one or more words from the audio data of the speech-including section / segment (¶73, monitor vocal audio for stored keywords by implementing speech to text translation applications per ¶86);
comparing the one or more words to a set of keywords to detect the at least one keyword within the one or more words (¶124, analyzing measured vocalization for detecting a keyword / keyphrase by matching a spectrum of ambient sound microphone to a database of stored spectrums associated with a keyword and/or phrase); and
appending information to at least some of the speech-including sections / segments, the information indicative of existence and/or identity of the detected at least one keyword within the one or more words of the speech-including section / segment (¶125, matching the detected keyword or keyphrase to an action listed on a database).
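For illustration, the ¶124 spectrum-matching step can be sketched as follows (Python). Usher discloses matching a measured spectrum against a database of stored spectrums associated with keywords; the cosine-similarity metric, the threshold, and the synthetic signals below are the examiner’s assumptions.

```python
# Hypothetical sketch of ¶124: match a section's spectrum against a
# database of stored spectrums, each associated with a keyword/phrase.
import numpy as np

def spectrum(samples):
    """Magnitude spectrum used as the stored 'fingerprint' of a keyword."""
    return np.abs(np.fft.rfft(samples))

def match_keyword(samples, stored_spectra, threshold=0.9):
    """Return the keyword whose stored spectrum best matches, if any."""
    s = spectrum(samples)
    for keyword, ref in stored_spectra.items():
        sim = float(np.dot(s, ref) /
                    (np.linalg.norm(s) * np.linalg.norm(ref) + 1e-12))
        if sim > threshold:
            return keyword
    return None

t = np.linspace(0.0, 1.0, 1600, endpoint=False)
hey_john = np.sin(2 * np.pi * 220 * t)        # stand-in vocalization
stored = {"hey john": spectrum(hey_john)}     # database of stored spectrums
print(match_keyword(hey_john, stored))        # -> hey john
print(match_keyword(np.sin(2 * np.pi * 440 * t), stored))  # -> None
```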
Regarding Claims 20 and 27, Usher discloses wherein the set of keywords is compiled from at least one of: user input, time of day, user's geographic location when the speech-including segment is received, history of previous user input, and/or information from computer memory (¶73, monitor vocal audio for stored keywords corresponding to a database of stored spectrums associated with a keyword and/or phrase) or one or more computing applications (¶73, “hello blue genie” activates Siri).
Regarding Claims 21 and 26, Usher discloses wherein said evaluating further comprises assigning an importance level to the speech-including section / segment (¶73, e.g., associate an action with detected keyword; e.g., activate Siri when “hello blue genie” is vocalized).
Regarding Claims 22 and 26, Usher discloses wherein the importance level is based at least in part on existence and/or identity of the at least one keyword, user input, time of day, user's geographic location when the speech-including section / segment is received, history of previous user input, and/or information from computer memory (¶73, monitor vocal audio for stored keywords corresponding to a database of stored spectrums associated with a keyword and/or phrase) or one or more computing applications (¶73, “hello blue genie” activates Siri).
Regarding Claim 28, Usher discloses based on whether the one or more words includes at least one keyword, the identity of the included at least one keyword, and/or the importance level of the speech-including section, selecting whether to communicate the information regarding the speech-including section to the user (¶79, when detecting target keyword phrase “hey john”, automatically pause music playback and pass through the ambient sound to allow John to speak with his name-calling friend) or to not communicate the information regarding the speech-including section to the user (¶73, when the keyword is detected, an action associated with the keyword is enacted; e.g., when “hello blue genie” is vocalized, activate Siri).
Regarding Claim 29, Usher discloses wherein communicating the information comprises at least one of:
displaying at least one text message to the user, the at least one text message indicative of the one or more words of the speech-including section;
providing at least one visual, audio, and/or tactile signal to the user, the at least one visual, audio, and/or tactile signal indicative of whether the speech-including section comprises a keyword, an identification of the keyword, and/or an importance of the keyword (¶73, when the keyword is detected, enact an action associated with the keyword; e.g., activate Siri when the phrase “hello blue genie” is vocalized);
providing at least one signal indicative of the audio information of the speech-including section to the user (¶79, when ambient sound includes target keyword phrase “hey john”, pass through the ambient sound and allow John to speak with name-calling friend); and
storing at least one signal indicative of the audio information of the speech-including section in memory circuitry, and subsequently retrieving the stored at least one signal from the memory circuitry and providing the stored at least one signal to the user.
Claim Rejections - 35 USC § 103
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 103 that form the basis for the rejections under this section made in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Usher et al. (US 2019/0278556 A1) in view of Guo et al. (US 12028683 B2).
Regarding Claims 10-12, Usher does not disclose keyword generation circuitry configured to generate at least some keywords of the set of stored keywords.
Guo discloses a system comprising ear-worn / in-ear headphones communicating with mobile phones (col. 7, lines 36-42) and configured to detect keywords in external sounds acquired by a reference microphone in order to play back all or part of the external sound (col. 7, lines 46-51).
The system comprises keyword generation circuitry (Fig. 8, library establishment module 88 comprising an input module, a deletion module, or an adjustment module; see col. 22, lines 55-62) configured to generate at least some keywords of a set of stored keywords (col. 20, lines 5-10 and col. 22, lines 63-67, user may enter a keyword as a sample sound for a sample sound library), wherein the keyword generation circuitry is configured to receive input information from at least one keyword source and/or at least one importance source (col. 20, lines 19-24, user may adjust the priority of a sample sound by setting weights according to the scenario per col. 22, lines 66-67), wherein the input information from the at least one keyword source and/or the at least one importance source comprises information provided by the recipient (col. 20, lines 5-10 and col. 22, lines 63-67, user may enter a keyword as a sample sound for a sample sound library and adjust the priority of the sample sound).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement a keyword generation circuit to generate keywords of the set of stored keywords in order to establish a library (Guo, col. 3, lines 37-44) of keywords / sample sounds such that, when the system detects data that does not contain keywords / sample sounds in the library, the user can avoid hearing sounds that are not of interest (Guo, col. 8, lines 35-40).
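For illustration, Guo’s library establishment module (input, deletion, and adjustment of weighted sample sounds / keywords) can be sketched as follows (Python); the class and method names are the examiner’s assumptions, not identifiers from Guo.

```python
# Hypothetical sketch of a keyword library with input, deletion, and
# adjustment operations and per-keyword priority weights (cf. Guo).
class KeywordLibrary:
    """Sample-sound / keyword library with per-keyword priority weights."""

    def __init__(self):
        self._weights = {}                      # keyword -> priority weight

    def input_keyword(self, keyword, weight=1.0):
        """Input module: user enters a keyword / sample sound."""
        self._weights[keyword] = weight

    def delete_keyword(self, keyword):
        """Deletion module: remove a keyword from the library."""
        self._weights.pop(keyword, None)

    def adjust_weight(self, keyword, weight):
        """Adjustment module: set the keyword's priority for a scenario."""
        if keyword in self._weights:
            self._weights[keyword] = weight

    def is_of_interest(self, keyword):
        """Sounds without a positive weight in the library are filtered out."""
        return self._weights.get(keyword, 0.0) > 0.0

lib = KeywordLibrary()
lib.input_keyword("john", weight=0.8)           # keyword source: user input
lib.adjust_weight("john", 1.0)                  # importance source: scenario
print(lib.is_of_interest("john"), lib.is_of_interest("horn"))  # True False
```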
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Usher et al. (US 2019/0278556 A1) in view of Lasky (US 10791404 B1).
Regarding Claim 23, Usher discloses wherein said communicating information is selected from the group consisting of:
displaying at least one text message to the user, the at least one text message indicative of the one or more words of the speech-including segment;
providing at least one visual, audio, and/or tactile signal to the user, the at least one visual, audio, and/or tactile signal indicative of whether the speech-including segment comprises a keyword, an identification of the keyword, and/or an importance of the keyword (¶73, when the keyword is detected, enact an action associated with the keyword; e.g., activate Siri when the phrase “hello blue genie” is vocalized);
providing at least one signal indicative of the audio data of the speech-including segment to the user (¶79, when ambient sound includes target keyword phrase “hey john”, pass through the ambient sound and allow John to speak with name-calling friend); and
storing at least one signal indicative of the audio data of the speech-including segment in memory circuitry, and subsequently retrieving the stored at least one signal from the memory circuitry and providing the stored at least one signal to the user.
Usher does not disclose (1) displaying at least one text message to the user, the at least one text message indicative of the one or more words of the speech-including segment, or (2) storing at least one signal indicative of the audio data of the speech-including segment in memory circuitry, and subsequently retrieving the stored at least one signal from the memory circuitry and providing the stored at least one signal to the user.
Lasky teaches a system comprising an assisted hearing device (col. 5, lines 50-55) converting speech-including segments into one or more words and displaying at least one text message comprising the one or more words to the user (col. 6, lines 1-7, a processor performs speech-to-text to convert voice sounds to discrete words corresponding to words spoken by a speaker for output; i.e., col. 7, lines 12-16, display to a user in text format), and storing at least one signal indicative of the audio data of the speech-including segment in memory circuitry (col. 6, lines 8-13 and lines 52-60, commonly occurring words prerecorded from speakers are stored in a data table / database), and subsequently retrieving the stored at least one signal from the memory circuitry and providing the stored at least one signal to the user (col. 6, lines 30-37 and col. 7, lines 4-7, when the clarity level of the actual spoken words is low, using the database to generate substitutes for the speaker’s voice by matching discrete words to a database code to retrieve appropriate synthesized words as replacements for the unclear spoken words for output).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to convert speech-including segments into text messages comprising one or more words for display to the user, or to store at least one signal indicative of the audio data of the speech-including segment in memory circuitry for subsequent retrieval, in order to provide or display substitutes when the clarity level of a speech-including segment is low (Lasky, col. 6, lines 30-37).
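For illustration, Lasky’s clarity-gated substitution can be sketched as follows (Python); the clarity threshold, the word-level representation, and the database contents are the examiner’s assumptions, not values from Lasky.

```python
# Hypothetical sketch of clarity-gated word substitution (cf. Lasky):
# pass the actual spoken word through when its clarity level is adequate;
# otherwise retrieve a stored synthesized substitute from a database.
SYNTHESIZED = {"john": "john_synth.wav"}        # assumed prerecorded words

def output_word(word, clarity, threshold=0.6):
    """Use the actual word when clear; else a stored synthesized substitute."""
    if clarity >= threshold:
        return word
    return SYNTHESIZED.get(word, word)

transcript = [("hey", 0.9), ("john", 0.3)]      # (word, measured clarity)
print([output_word(w, c) for w, c in transcript])
# -> ['hey', 'john_synth.wav']
```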
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to examiner Richard Z. Zhu, whose telephone number is 571-270-1587, or to the examiner’s supervisor, Hai Phan, whose telephone number is 571-272-6338. Examiner Richard Zhu can normally be reached M-Th, 07:30-17:00.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RICHARD Z ZHU/
Primary Examiner, Art Unit 2654
01/31/2026
1 Merriam-Webster, “broadcast,” https://www.merriam-webster.com/dictionary/broadcast