Prosecution Insights
Last updated: April 19, 2026
Application No. 18/757,035

DATA COMMUNICATION OVER INAUDIBLE SIGNALS

Non-Final OA (§102, §103)
Filed: Jun 27, 2024
Examiner: GILES, EBONI N
Art Unit: 2622
Tech Center: 2600 (Communications)
Assignee: Inthrall Global Corporation
OA Round: 1 (Non-Final)

Grant Probability: 63% (Moderate)
OA Rounds: 1-2
To Grant: 3y 7m
With Interview: 72%

Examiner Intelligence

Career Allow Rate: 63% of resolved cases (440 granted / 697 resolved; +1.1% vs TC avg)
Interview Lift: +8.6% for resolved cases with interview (Moderate, roughly +9% lift)
Typical Timeline: 3y 7m avg prosecution (33 currently pending)
Career History: 730 total applications across all art units

Statute-Specific Performance

§101: 2.0% (-38.0% vs TC avg)
§103: 78.5% (+38.5% vs TC avg)
§102: 9.1% (-30.9% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)

Tech Center averages are estimates; based on career data from 697 resolved cases.

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office action is in response to the application filed 6/27/2024, in which Claims 1-20 are pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 9/26/2024 was filed after the mailing date of the application on 6/27/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claims 15-20 recite the limitations “user input module”, “encoding module”, “modulation module”, “audio track generation module”, “analysis module”, and “decoding module” in claims 15 and 20; “embedding module” in claim 16; “data profile definition module” in claim 17; “data profile utilization module” in claim 18; “modulation module” in claim 19; and also “generation module”, “modulation module”, “transmitting device sending module”, “detection module”, “decoding module”, and “receiving device sending module”. These limitations have been interpreted under 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) because they use the non-structural term “module” coupled with functional language, e.g., “convert”, “modulate”, “generate”, “detect and decode”, “embed”, “define”, “retrieve”, “select”, “transmit”, and “send”, without reciting sufficient structure to achieve the function. Furthermore, the non-structural term is not preceded by a structural modifier. The word “module” is not recognized as the name of a structure for performing the recited functions; it is simply a substitute for “means for” coupled with functional language. Since these claim limitations invoke 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph), claims 15-20 are interpreted to cover the corresponding structure (or material or acts) described in the specification at paragraphs [0190], [0191], [0193] and Figures 9-11 that achieves the claimed function, and equivalents thereof.
If applicant wishes to provide further explanation or dispute the examiner's interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 5, 7, and 10-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Publication 2014/0029768 to Hong et al. (“Hong”).

As to Claim 1, Hong teaches a method for transmitting data, comprising: a. receiving user input as one or more message blocks (messages 122 may contain a process for execution by a user device, such as user device 160. The messages may be received from an outside source, such as a media content provider, for use with an audio portion of media content. The messages may also be input into the encoding device, for example, using an input device such as a keyboard or mouse, see ¶ 0045); b.
converting the one or more message blocks into binary data resulting in one or more converted message blocks (Encoding of the audio pattern from messages 122 may be done using an audio encoding technology. The audio encoding technology may utilize a coding scheme where text and/or data is translated into binary values, which may then be encoded. For example, letters and digits may be mapped to a binary value. Below is a table showing text and digits mapped to binary values between 0 and 44, which may be encoded by 6 bits, where C is the character, and V is the binary value, see ¶ 0047); c. modulating each of the one or more converted message blocks onto one or more carrier frequencies producing one or more modulated messages (The binary values may be transformed into the audio pattern by utilizing a sound wave, such as an audio sine wave. The sound wave may be high frequency, such as an inaudible 19 kHz sine sound wave, in order to make the sound wave normally inaudible to a user. The high frequency sound wave may correspond to a binary value, for example 1, while silence corresponds to 0, see ¶ 0052), wherein the one or more carrier frequencies are within one or more predefined frequency bands, wherein each carrier frequency having a predetermined spacing from one or more adjacent carrier frequencies (a "triple chunk" pattern may be utilized, wherein, when assigning 1 to an audio signal group, a sample of a sine wave with a known length, and a specific inaudible frequency, such as 19 kHz [carrier frequencies are within one or more predefined frequency bands], may be placed between two samples of silence of the same known length to form the group [each carrier frequency having a predetermined spacing from one or more adjacent carrier frequencies]. Additionally, in such an encoding scheme, 0 may also correspond to three samples of silence using the known length. Each sample is designated as a "chunk." 
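For orientation, the "triple chunk" scheme quoted above can be sketched in Python. This is an illustrative reconstruction, not Hong's actual implementation: the 44.1 kHz sample rate is an assumption, and the 1024-sample chunk length is borrowed from the k=1024 samples Hong mentions elsewhere (¶ 0063).

```python
import math

SAMPLE_RATE = 44_100   # assumed; Hong does not specify a sample rate
CHUNK_LEN = 1024       # samples per "chunk" (k=1024 appears in ¶ 0063)
CARRIER_HZ = 19_000    # inaudible carrier frequency from ¶ 0052

def sine_chunk():
    """One chunk of a 19 kHz sine wave."""
    return [math.sin(2 * math.pi * CARRIER_HZ * n / SAMPLE_RATE)
            for n in range(CHUNK_LEN)]

def silent_chunk():
    """One chunk of silence."""
    return [0.0] * CHUNK_LEN

def encode_bits(bits):
    """Triple-chunk pattern: bit 1 -> silence/sine/silence (010),
    bit 0 -> silence/silence/silence (000)."""
    samples = []
    for bit in bits:
        middle = sine_chunk() if bit else silent_chunk()
        samples += silent_chunk() + middle + silent_chunk()
    return samples

track = encode_bits([1, 0, 1])
assert len(track) == 3 * 3 * CHUNK_LEN   # 3 bits x 3 chunks each
```

Each bit therefore costs three chunks of airtime, which is the bandwidth trade-off implicit in Hong's robustness claim.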
Thus, in such an audio pattern, a binary triple of 3 bits is used to indicate the audio pattern of a bit , where 010, corresponding to silence/audio sine wave/silence, is used to indicate 1 and 000, corresponding to silence/silence/silence, is used to indicate 0, see ¶ 0052), wherein the one or more predefined frequency bands are in a range inaudible to humans (The binary values may be transformed into the audio pattern by utilizing a sound wave, such as an audio sine wave. The sound wave may be high frequency, such as an inaudible 19 kHz sine sound wave, in order to make the sound wave normally inaudible to a user, see ¶ 0052); d. generating an audio track incorporating the one or more modulated messages (The binary values may be transformed into the audio pattern by utilizing a sound wave, such as an audio sine wave. The sound wave may be high frequency, such as an inaudible 19 kHz sine sound wave, in order to make the sound wave normally inaudible to a user. The high frequency sound wave may correspond to a binary value, for example 1, while silence corresponds to 0. However, in other embodiments, a different pattern may be utilized. For example, a "triple chunk" pattern may be utilized, wherein, when assigning 1 to an audio signal group, a sample of a sine wave with a known length, and a specific inaudible frequency, such as 19 kHz, may be placed between two samples of silence of the same known length to form the group, see ¶ 0052; Once the message has been encoded into an audio pattern [modulated message], the encoding device may contain audio content for transmission with the audio pattern. At step 206, the audio pattern encoded at step 204 is further mixed with audio content, such as audio content 126 in database 120. The audio content may correspond to an audio track of an audiovisual content, such as a television broadcast, or may correspond to a separate audio content, such as a radio broadcast, see ¶ 0054); and e. 
transmitting the audio track (Once the message has been encoded into an audio pattern, the encoding device may contain audio content for transmission with the audio pattern. At step 206, the audio pattern encoded at step 204 is further mixed with audio content, such as audio content 126 in database 120. The audio content may correspond to an audio track of an audiovisual content, such as a television broadcast, or may correspond to a separate audio content, such as a radio broadcast, see ¶ 0054; Thus, the encoding device has an audio communication containing at least an audio pattern corresponding to a transmittable message. Finally, at step 208, the encoding device transmits the mixed audio content to a user device, see ¶ 0055).

As to Claim 2, which depends on Claim 1, Hong teaches creating one or more data profiles based on the one or more modulated messages, wherein each of the one or more data profiles includes information about the number of frequencies used for encoding the one or more modulated messages, the predetermined spacing between the carrier frequencies, and the duration of each one of the one or more modulated messages (The binary values may be transformed into the audio pattern by utilizing a sound wave, such as an audio sine wave. The sound wave may be high frequency, such as an inaudible 19 kHz sine sound wave, in order to make the sound wave normally inaudible to a user. The high frequency sound wave may correspond to a binary value, for example 1, while silence corresponds to 0. However, in other embodiments, a different pattern may be utilized. For example, a "triple chunk" pattern may be utilized, wherein, when assigning 1 to an audio signal group, a sample of a sine wave with a known length, and a specific inaudible frequency, such as 19 kHz, may be placed between two samples of silence of the same known length to form the group [predetermined spacing between carrier frequencies].
Additionally, in such an encoding scheme, 0 may also correspond to three samples of silence using the known length. Each sample is designated as a "chunk." Thus, in such an audio pattern, a binary triple of 3 bits is used to indicate the audio pattern [duration of each modulated message] of a bit, where 010, corresponding to silence/audio sine wave/silence, is used to indicate 1 and 000, corresponding to silence/silence/silence, is used to indicate 0, see ¶ 0052; As an audio pattern frequency of 19 kHz is at the upper limit of perceivable sounds by an ordinary person across all age ranges, a 19 kHz wave may be generally used. However, lower frequency waves may be used where the population is known to be of a different demographic. In general, a sound wave of 15 kHz to 20 kHz may be used and be made inaudible [number of frequencies used for encoding] to the ordinary person depending on the target demographic, see ¶ 0021).

As to Claim 5, which depends on Claim 1, Hong teaches a. adaptively selecting polarity of one or more carrier frequencies based on environmental noise conditions (When utilizing a "triple chunk" pattern, the sound wave may be made further inaudible by adjusting the amplitude of the sound wave corresponding to the 19 kHz sine wave. For example, the sound wave amplitude at 19 kHz may be linearly increased from 0 to 1 over the first half of the sample, and linearly decreased from 1 to 0 over the second half of the wave sample [adaptively selecting polarity of one or more carrier frequencies]. For silent samples, a wave of any frequency with zero amplitude may be utilized. Utilizing a sound wave of this specific design, it is possible to robustly detect and decode the resulting audio pattern, see ¶ 0053; At step 206, the audio pattern encoded at step 204 is further mixed with audio content, such as audio content 126 in database 120.
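The amplitude ramp Hong describes in ¶ 0053 (0 to 1 over the first half of the sample, 1 to 0 over the second half) is a triangular envelope on the carrier chunk. A minimal sketch, with chunk length and sample rate assumed as before:

```python
import math

def ramped_sine_chunk(n_samples=1024, freq_hz=19_000, rate_hz=44_100):
    """One 19 kHz chunk whose amplitude ramps 0->1 over the first half
    and 1->0 over the second half, per ¶ 0053 of Hong."""
    half = n_samples / 2
    out = []
    for n in range(n_samples):
        # triangular envelope: peaks at 1.0 at the midpoint
        env = n / half if n < half else (n_samples - n) / half
        out.append(env * math.sin(2 * math.pi * freq_hz * n / rate_hz))
    return out

chunk = ramped_sine_chunk()
assert abs(chunk[0]) < 1e-9                      # starts silent
assert max(abs(s) for s in chunk) <= 1.0 + 1e-9  # never clips
```

The ramp avoids the audible click a hard-keyed 19 kHz tone would produce, which is presumably why Hong says it makes the pattern "further inaudible."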
The audio content may correspond to an audio track of an audiovisual content, such as a television broadcast, or may correspond to a separate audio content, such as a radio broadcast. The audio pattern may be mixed with the audio content in such a way so that the audio pattern is inaudible to an audience consuming the audio content, but is perceptible to a user device, such as user device 160. Using the above described "triple chunk" method with a 19 kHz wave of varying amplitude, the audio pattern may be both inaudible and unique to noises from other sound sources, see ¶ 0054); Hong teaches b. dynamically allocating one or more bits of the one or more converted message blocks to the one or more carrier frequencies based on respective signal-to-noise ratios (The high frequency sound wave may correspond to a binary value, for example 1, while silence corresponds to 0. However, in other embodiments, a different pattern may be utilized. For example, a "triple chunk" pattern may be utilized, wherein, when assigning 1 to an audio signal group, a sample of a sine wave with a known length, and a specific inaudible frequency, such as 19 kHz, may be placed between two samples of silence of the same known length to form the group. Additionally, in such an encoding scheme, 0 may also correspond to three samples of silence using the known length. Each sample is designated as a "chunk." Thus, in such an audio pattern, a binary triple of 3 bits is used to indicate the audio pattern of a bit, where 010, corresponding to silence/audio sine wave/silence, is used to indicate 1 and 000, corresponding to silence/silence/silence, is used to indicate 0, see ¶ 0052; When utilizing a "triple chunk" pattern, the sound wave may be made further inaudible by adjusting the amplitude of the sound wave corresponding to the 19 kHz sine wave. 
For example, the sound wave amplitude at 19 kHz may be linearly increased from 0 to 1 over the first half of the sample, and linearly decreased from 1 to 0 over the second half of the wave sample. For silent samples, a wave of any frequency with zero amplitude may be utilized. Utilizing a sound wave of this specific design, it is possible to robustly detect and decode the resulting audio pattern, see ¶ 0053; audio patterns may include sound samples and silent samples. In one embodiment described above, the audio pattern to designate each bit utilizes a "triple chunk" pattern that includes sets of 3 samples of known length, wherein the middle sample corresponds to either an increasing/decreasing amplitude sine wave sample, or a silent sample. Thus, the difference between binary 1 and 0 is the appearance of a sine wave of 19 kHz, see ¶ 0060).

As to Claim 7, which depends on Claim 1, Hong teaches further comprising: a. accessing a library of pre-existing inaudible signals (Encoding/decoding application 112 may correspond to a software program executable by a hardware processor that is configured to encode messages 122 in database 120. Encoding/decoding application 112 may include processes for encoding messages 122 into audio patterns, see ¶ 0028); and b. selecting one or more of the pre-existing inaudible signals from the library for modulating the one or more converted message blocks based on predefined criteria (Encoding/decoding application 112 may correspond to a software program executable by a hardware processor that is configured to encode messages 122 in database 120. Encoding/decoding application 112 may include processes for encoding messages 122 into audio patterns. The audio pattern may correspond to a sound wave pattern that is decodable by an application with knowledge of the encoding/decoding scheme.
The audio pattern created by encoding/decoding application 112 and messages 122 may be configured so the sound wave is normally inaudible to a user, see ¶ 0028). As to Claim 10, Hong teaches a method for receiving data, comprising: a. capturing audio data (encoding/decoding application 162 may passively monitor audio communications 140 to determine audio patterns and provide a viewable list of received and determined audio patterns, see ¶ 0057; the user device determines the audio pattern from the mixed audio content. The user device may separate the audio pattern from the mixed audio content, for example, by using encoding/decoding application 162 to determine audio patterns, see ¶ 0058); b. storing the captured audio data in a circular array data structure (When audio pattern data is detected, it may be copied to a buffer [circular array data structure] of the user device for analysis, see ¶ 0058); c. analyzing the circular array data structure for one or more predefined frequency bands encompassing a plurality of carrier frequencies with each carrier frequency having a predetermined spacing from one or more adjacent carrier frequencies, wherein the one or more predefined frequency bands are in a range inaudible to humans (audio patterns may include sound samples and silent samples. In one embodiment described above, the audio pattern to designate each bit utilizes a "triple chunk" pattern that includes sets of 3 samples of known length, wherein the middle sample corresponds to either an increasing/decreasing amplitude sine wave sample, or a silent sample [predetermined spacing from adjacent carrier frequencies]. Thus, the difference between binary 1 and 0 is the appearance of a sine wave of 19 kHz. When audio data is streamed to a buffer, data chunks of 3 samples using the known length may be extracted for analysis, see ¶ 0060); d. 
identifying a structured message format having a start block, one or more message blocks, and an end block, wherein the one or more message blocks contain one or more encoded messages comprising user input and are encoded in the plurality of carrier frequencies, and wherein each of the start block and the end block is composed of unique sequences of the plurality of carrier frequencies (In addition to encoding a message into binary values, additional code may be added to the start of the binary stream for transmission to the user device. For example, when converting the text or data to a stream of binary values, a start notification or "flag" may be inserted at the beginning of the stream to designate that point as the stream start. The start notification may be designated as a field of specific bit length and content. In one embodiment, the start notification may include an 8 bit field of all bit 1s, see ¶ 0049; a length field may be added to indicate the length of the data in the stream. The length field may include a field next to the start notification with a binary value. In one embodiment, the length field may include a 6 bit binary value indicating the length of textual information contained in the message [message block] when using the above mapped characters, see ¶ 0050); e. detecting the one or more encoded messages by recognizing the unique sequences of the carrier frequencies of the start block and the end block (In order to avoid errors in the stream, a longitudinal redundancy check (LRC) sum may be implemented and inserted to the end of the binary stream. Under the LRC sum system, the binary stream may be divided into longitudinal groups of specific bits amounts and a single bit parity code may be computed from every longitudinal group. In one embodiment, the longitudinal parity group contains 8 bits, and the single bit parity code is computed from the number of times bit 1 occurs in the group. 
If the number of times 1 occurs is odd, then the parity code is set to 1, otherwise it is set to 0, see ¶ 0051); f. converting each of the one or more detected encoded messages from frequency domain data into binary data by: i. applying a Fourier Transform to each one of the one or more detected encoded messages, ii. analyzing frequency components through detection of variations in one or more of the amplitude, phase, or frequency of each of the plurality of carrier frequencies of the one or more detected encoded messages (the audio pattern to designate each bit utilizes a "triple chunk" pattern that includes sets of 3 samples of known length, wherein the middle sample corresponds to either an increasing/decreasing amplitude sine wave sample, or a silent sample. Thus, the difference between binary 1 and 0 is the appearance of a sine wave of 19 kHz. When audio data is streamed to a buffer, data chunks of 3 samples using the known length may be extracted for analysis. For every data chunk, fast Fourier transform (FFT) may be applied on the audio samples and the amplitude energy around the 19 kHz frequency may be computed [analyzing frequency components through variations in amplitude], see ¶ 0060), iii. mapping each identified frequency component to a corresponding binary value, and iv. decrypting the binary data to retrieve the user input (the Goertzel algorithm may be used to detect an encoded audio pattern. The Goertzel algorithm provides for evaluation of a Discrete Fourier Transform using a small number of selected frequencies efficiently. For example, if the specific frequency is set at 19 kHz, the Goertzel algorithm may be utilized to analyze an incoming audio signal and detect the start of an audio pattern.
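The Goertzel detection Hong relies on in ¶ 0062 evaluates a single DFT bin cheaply, which is why it suits continuous monitoring for a known 19 kHz carrier. A self-contained sketch (the sample rate and block length are assumptions, not Hong's parameters):

```python
import math

def goertzel_power(samples, target_hz, rate_hz):
    """Power of the DFT bin nearest target_hz, via the standard
    Goertzel recurrence (one multiply-add per sample)."""
    n = len(samples)
    k = round(n * target_hz / rate_hz)   # nearest DFT bin index
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

RATE, N, F = 44_100, 1024, 19_000
tone = [math.sin(2 * math.pi * F * n / RATE) for n in range(N)]
# energy at the 19 kHz bin dwarfs the leakage at a distant bin
assert goertzel_power(tone, F, RATE) > goertzel_power(tone, 5_000, RATE)
```

Unlike a full FFT, this computes only the one bin of interest, so a receiver can run it continuously and fall back to an FFT only after a pattern start is detected, mirroring Hong's two-stage approach.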
After detecting the start of an audio pattern, an FFT may be utilized for decoding the audio pattern, see ¶ 0062; an audio pattern chunk corresponding to 010 may correspond to bit 1 and an audio pattern chunk corresponding to 000 may correspond to bit 0… Accordingly, when decoding the audio pattern, the binary triples corresponding to 000, 100, 001, and 101 may be mapped to bit 0, while 010, 110, 011, and 111 may be mapped to bit 1, see ¶ 0063); and g. displaying the user input to a user (a device containing a decoding feature may be activated. Upon receiving the audio pattern, the device may determine and decode the audio pattern to receive the underlying message. Thus, the device may then display the message and execute any embedded processes, see ¶ 0015).

As to Claim 11, which depends on Claim 10, Hong teaches further comprising obtaining one or more predefined data profiles for the captured audio data, wherein each of the one or more predefined data profiles includes one or more information about number of frequencies used for encoding the one or more message blocks, the predetermined spacing between the carrier frequencies, or the duration of each one of the one or more message blocks (The binary values may be transformed into the audio pattern by utilizing a sound wave, such as an audio sine wave. The sound wave may be high frequency, such as an inaudible 19 kHz sine sound wave, in order to make the sound wave normally inaudible to a user. The high frequency sound wave may correspond to a binary value, for example 1, while silence corresponds to 0. However, in other embodiments, a different pattern may be utilized. For example, a "triple chunk" pattern may be utilized, wherein, when assigning 1 to an audio signal group, a sample of a sine wave with a known length, and a specific inaudible frequency, such as 19 kHz, may be placed between two samples of silence of the same known length to form the group [predetermined spacing between carrier frequencies].
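The tolerant triple-to-bit mapping quoted from ¶ 0063 above (000/100/001/101 to bit 0; 010/110/011/111 to bit 1) reduces to checking whether any carrier energy reached the middle or a neighbouring chunk. A minimal sketch of that lookup:

```python
# Triples mapped to bit 1 per ¶ 0063; misaligned patterns such as 110
# and 011 still decode to 1 because the sine energy merely spread into
# an adjacent chunk during transmission.
ONE_TRIPLES = {"010", "110", "011", "111"}

def triples_to_bits(triples):
    """Map each received 3-chunk energy pattern to a decoded bit."""
    return [1 if t in ONE_TRIPLES else 0 for t in triples]

assert triples_to_bits(["010", "000", "110", "101"]) == [1, 0, 1, 0]
```

Note the mapping is equivalent to testing the membership set rather than requiring an exact 010 match, which is what gives the scheme its synchronization slack.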
Additionally, in such an encoding scheme, 0 may also correspond to three samples of silence using the known length. Each sample is designated as a "chunk." Thus, in such an audio pattern, a binary triple of 3 bits is used to indicate the audio pattern [duration of each modulated message] of a bit, where 010, corresponding to silence/audio sine wave/silence, is used to indicate 1 and 000, corresponding to silence/silence/silence, is used to indicate 0, see ¶ 0052; As an audio pattern frequency of 19 kHz is at the upper limit of perceivable sounds by an ordinary person across all age ranges, a 19 kHz wave may be generally used. However, lower frequency waves may be used where the population is known to be of a different demographic. In general, a sound wave of 15 kHz to 20 kHz may be used and be made inaudible [number of frequencies used for encoding] to the ordinary person depending on the target demographic, see ¶ 0021; audio patterns may include sound samples and silent samples. In one embodiment described above, the audio pattern to designate each bit utilizes a "triple chunk" pattern that includes sets of 3 samples of known length, wherein the middle sample corresponds to either an increasing/decreasing amplitude sine wave sample, or a silent sample [predetermined spacing from adjacent carrier frequencies]. Thus, the difference between binary 1 and 0 is the appearance of a sine wave of 19 kHz. When audio data is streamed to a buffer, data chunks of 3 samples using the known length may be extracted for analysis, see ¶ 0060; the Goertzel algorithm may be used to detect an encoded audio pattern. The Goertzel algorithm provides for evaluation of a Discrete Fourier Transform using a small number of selected frequencies efficiently. For example, if the specific frequency is set at 19 kHz, the Goertzel algorithm may be utilized to analyze an incoming audio signal and detect the start of an audio pattern.
After detecting the start of an audio pattern, an FFT may be utilized for decoding the audio pattern, see ¶ 0062).

As to Claim 12, which depends on Claim 11, Hong teaches wherein the detection of the one or more encoded messages involves comparing extracted features from the captured audio data with the one or more predefined data profiles (the Goertzel algorithm may be used to detect an encoded audio pattern. The Goertzel algorithm provides for evaluation of a Discrete Fourier Transform using a small number of selected frequencies efficiently. For example, if the specific frequency is set at 19 kHz, the Goertzel algorithm may be utilized to analyze an incoming audio signal and detect the start of an audio pattern. After detecting the start of an audio pattern, an FFT may be utilized for decoding the audio pattern, see ¶ 0062; an audio pattern chunk corresponding to 010 may correspond to bit 1 and an audio pattern chunk corresponding to 000 may correspond to bit 0. However, noise in the transmit channel and issues with synchronization may cause a transmitted data chunk to not align with a received data chunk. For example, the k=1024 samples may spread to two consecutive samples, creating 110, 011, or other data chunks. Accordingly, when decoding the audio pattern, the binary triples corresponding to 000, 100, 001, and 101 may be mapped to bit 0, while 010, 110, 011, and 111 may be mapped to bit 1, see ¶ 0063).

As to Claim 13, which depends on Claim 10, Hong teaches detecting and decoding multiple encoded messages simultaneously within the captured audio data (At step 312, the user device determines the audio pattern from the mixed audio content. The user device may separate the audio pattern from the mixed audio content, for example, by using encoding/decoding application 162 to determine audio patterns. When audio pattern data is detected, it may be copied to a buffer of the user device for analysis, see ¶ 0058), wherein the method comprises: a.
obtaining separate predefined data profiles for each encoded message of the one or more encoded messages (encoding/decoding application 162 may passively monitor audio communications 140 to determine audio patterns and provide a viewable list of received and determined audio patterns and/or decoded messages to the user, see ¶ 0057; the user device determines the audio pattern from the mixed audio content. The user device may separate the audio pattern from the mixed audio content, for example, by using encoding/decoding application 162 to determine audio patterns. When audio pattern data is detected, it may be copied to a buffer of the user device for analysis, see ¶ 0058); b. utilizing a multi-channel Fourier Transform to analyze multiple predefined frequency bands (When audio data is streamed to a buffer, data chunks of 3 samples using the known length may be extracted for analysis. For every data chunk, fast Fourier transform (FFT) may be applied on the audio samples and the amplitude energy around the 19 kHz frequency may be computed, see ¶ 0060); c. applying a signal separation algorithm to isolate overlapping frequency components (the Goertzel algorithm may be used to detect an encoded audio pattern. The Goertzel algorithm provides for evaluation of a Discrete Fourier Transform using a small number of selected frequencies efficiently. For example, if the specific frequency is set at 19 kHz, the Goertzel algorithm may be utilized to analyze an incoming audio signal and detect the start of an audio pattern. After detecting the start of an audio pattern, an FFT may be utilized for decoding the audio pattern, see ¶ 0062); d. mapping each isolated frequency component to its respective binary value based on the predefined data profiles (an audio pattern chunk corresponding to 010 may correspond to bit 1 and an audio pattern chunk corresponding to 000 may correspond to bit 0.
However, noise in the transmit channel and issues with synchronization may cause a transmitted data chunk to not align with a received data chunk. For example, the k=1024 samples may spread to two consecutive samples, creating 110, 011, or other data chunks. Accordingly, when decoding the audio pattern, the binary triples corresponding to 000, 100, 001, and 101 may be mapped to bit 0, while 010, 110, 011, and 111 may be mapped to bit 1, see ¶ 0063); and e. reconstructing the decrypted user inputs into separate messages (Once audio patterns are detected, the encoding/decoding application may wait for a start notification. The start notification may correspond to an identifiable code section that instructs the encoding/decoding application it is the start of a message. In one embodiment, the start notification may include an 8 bit group of all bit 1s, see ¶ 0066; After recognizing the start notification, the encoding/decoding application on the user device may further detect a length code. As described above, the length code may indicate the length of data in the binary stream. Once the start notification and a length code are determined, the encoding/decoding application may begin receiving the subsequent audio pattern. The encoding/decoding application may receive the binary stream of the audio pattern until the number of characters or data amount specified in the length code is received, see ¶ 0067). As to Claim 14, Hong depending on Claim 10, Hong teaches further comprising using the decrypted user input to control one or more functions of a receiving device, wherein the method comprises: a. interpreting the decrypted user input as having one or more control commands; b. executing the control commands on the receiving device (Once a message has been received and decoded from the audio pattern, the user device may display the message at step 316. User device 160 may utilize display 184 to display messages. 
In addition to displaying the message, the audio pattern may further contain processes for execution by the user device in certain embodiments. Thus, the user device may further execute a process associated with the message such as initiating a data retrieval process on a user device, navigating to a website using a user device web browser, and transmitting a second audio pattern corresponding to a user action, see ¶ 0069). As to Claim 15, Hong teaches a system for transmitting and receiving data over inaudible signals, comprising: a. a transmitting device having a processor (user device 160 may correspond to an interactive device for audio transmission and/or reception, such as a personal computer or system of networked computers, PDA, mobile cellular phone, tablet computer, or other device. Although a user device is shown, the user device may be managed or controlled by any suitable processing device, see ¶ 0036), a memory (Database 170 may correspond to a data collection stored in a memory of user device 160 and containing messages 172 and user responses 174, see ¶ 0041), a user input module configured to receive user input as one or more message blocks (messages 122 may contain a process for execution by a user device, such as user device 160. The messages may be received from an outside source, such as a media content provider, for use with an audio portion of media content. The messages may also be input into the encoding device, for example, using an input device such as a keyboard or mouse, see ¶ 0045), an encoding module configured to convert the message blocks into binary data and digital signals (the message is encoded into an audio pattern by the encoding device. An encoding process or application, for example, encoding/decoding application 112, may perform the encoding. 
The encoding/decoding application may perform the encoding using messages 122, see ¶ 0046; Encoding of the audio pattern from messages 122 may be done using an audio encoding technology. The audio encoding technology may utilize a coding scheme where text and/or data is translated into binary values, which may then be encoded. For example, letters and digits may be mapped to a binary value. Below is a table showing text and digits mapped to binary values between 0 and 44, which may be encoded by 6 bits, where C is the character, and V is the binary value, see ¶ 0047); a modulation module configured to modulate a plurality of carrier frequencies with the digital signals, where the carrier frequencies are inaudible to humans (The binary values may be transformed into the audio pattern by utilizing a sound wave, such as an audio sine wave. The sound wave may be high frequency, such as an inaudible 19 kHz sine sound wave, in order to make the sound wave normally inaudible to a user. The high frequency sound wave may correspond to a binary value, for example 1, while silence corresponds to 0. However, in other embodiments, a different pattern may be utilized. For example, a "triple chunk" pattern may be utilized, wherein, when assigning 1 to an audio signal group, a sample of a sine wave with a known length, and a specific inaudible frequency, such as 19 kHz, may be placed between two samples of silence of the same known length to form the group. Additionally, in such an encoding scheme, 0 may also correspond to three samples of silence using the known length. Each sample is designated as a "chunk." 
Thus, in such an audio pattern, a binary triple of 3 bits is used to indicate the audio pattern of a bit, where 010, corresponding to silence/audio sine wave/silence, is used to indicate 1 and 000, corresponding to silence/silence/silence, is used to indicate 0, see ¶ 0052), an audio track generation module configured to generate an audio track incorporating the modulated signals; and a transmission module configured to transmit the audio track through an audio emitting component (Once the message has been encoded into an audio pattern, the encoding device may contain audio content for transmission with the audio pattern. At step 206, the audio pattern encoded at step 204 is further mixed with audio content, such as audio content 126 in database 120. The audio content may correspond to an audio track of an audiovisual content, such as a television broadcast, or may correspond to a separate audio content, such as a radio broadcast. The audio pattern may be mixed with the audio content in such a way so that the audio pattern is inaudible to an audience consuming the audio content, but is perceptible to a user device, such as user device 160, see ¶ 0054); and b. a receiving device having a processor (user device 160 may correspond to an interactive device for audio transmission and/or reception, such as a personal computer or system of networked computers, PDA, mobile cellular phone, tablet computer, or other device. 
Although a user device is shown, the user device may be managed or controlled by any suitable processing device, see ¶ 0036), a memory, an audio capturing module configured to capture audio data, an analysis module configured to detect and decode the inaudible signals within the captured audio data (encoding/decoding application 162 may passively monitor audio communications 140 to determine audio patterns and provide a viewable list of received and determined audio patterns, see ¶ 0057; the user device determines the audio pattern from the mixed audio content. The user device may separate the audio pattern from the mixed audio content, for example, by using encoding/decoding application 162 to determine audio patterns, see ¶ 0058); a decoding module configured to convert the decoded signals into user-readable data (encoding/decoding application 162 may passively monitor audio communications 140 to determine audio patterns and provide a viewable list of received and determined audio patterns and/or decoded messages to the user, see ¶ 0057; Once the audio pattern has been determined, the audio pattern may be decoded using a decoding scheme corresponding to the encoding scheme used to create the audio pattern. At step 314, the audio pattern is decoded by the user device. Encoding/decoding application 162 may include a decoding scheme and process in addition to the audio pattern recognition process. The decoding scheme may be utilized with the decoding process to determine a text and/or data message from the audio pattern, see ¶ 0059) and a display module configured to display the decoded data to a user (a device containing a decoding feature may be activated. Upon receiving the audio pattern, the device may determine and decode the audio pattern to receive the underlying message. Thus, the device may then display the message and execute any embedded processes, see ¶ 0015). 
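For context on the detection step the examiner cites repeatedly (Hong ¶ 0062: evaluate a single DFT bin at the 19 kHz carrier with the Goertzel algorithm rather than computing a full FFT), a minimal sketch follows. This is an illustrative reconstruction, not code from the application; the 48 kHz sample rate, 1024-sample window, and function names are our assumptions.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Single-bin DFT power via the Goertzel recurrence.

    Illustrative reconstruction of the detection step cited from
    Hong ¶ 0062; parameter names are ours, not the application's.
    """
    n = len(samples)
    k = round(n * target_freq / sample_rate)        # nearest DFT bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)   # Goertzel coefficient
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    # squared magnitude of the selected bin, with no full FFT required
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

# A 19 kHz tone should carry far more energy at 19 kHz than silence does.
RATE, FREQ, N = 48000, 19000, 1024   # assumed parameters
tone = [math.sin(2 * math.pi * FREQ * i / RATE) for i in range(N)]
silence = [0.0] * N
assert goertzel_power(tone, RATE, FREQ) > goertzel_power(silence, RATE, FREQ)
```

A receiver can run this cheap per-window check continuously and fall back to a full FFT only once a pattern start is detected, which is the division of labor the citation describes.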
As to Claim 16, Hong depending on Claim 15, Hong teaches wherein the transmitting device further comprises an embedding module configured to embed the audio track with the modulated signals into existing audio content (The binary values may be transformed into the audio pattern by utilizing a sound wave, such as an audio sine wave. The sound wave may be high frequency, such as an inaudible 19 kHz sine sound wave, in order to make the sound wave normally inaudible to a user. The high frequency sound wave may correspond to a binary value, for example 1, while silence corresponds to 0. However, in other embodiments, a different pattern may be utilized. For example, a "triple chunk" pattern may be utilized, wherein, when assigning 1 to an audio signal group, a sample of a sine wave with a known length, and a specific inaudible frequency, such as 19 kHz, may be placed between two samples of silence of the same known length to form the group, see ¶ 0052; Once the message has been encoded into an audio pattern, the encoding device may contain audio content for transmission with the audio pattern. At step 206, the audio pattern encoded at step 204 is further mixed with audio content, such as audio content 126 in database 120. The audio content may correspond to an audio track of an audiovisual content, such as a television broadcast, or may correspond to a separate audio content, such as a radio broadcast. The audio pattern may be mixed with the audio content in such a way so that the audio pattern is inaudible to an audience consuming the audio content, but is perceptible to a user device, such as user device 160, see ¶ 0054). 
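The "triple chunk" encoding the examiner cites from Hong ¶ 0052 (1 is silence/sine/silence, 0 is three silent chunks) together with the noise-tolerant triple mapping of ¶ 0063 (000, 100, 001, 101 map to bit 0; 010, 110, 011, 111 map to bit 1, which reduces to reading the middle chunk of each triple) can be sketched as a round trip. A hedged illustration only: the sample rate, energy threshold, and function names are our assumptions, not details from the application.

```python
import math

RATE = 48000    # sample rate (assumed for illustration)
FREQ = 19000    # inaudible carrier frequency per Hong ¶ 0052
CHUNK = 1024    # samples per chunk (Hong's k=1024 example)

def encode_bit(bit):
    """1 -> silence/sine/silence (010); 0 -> three silent chunks (000)."""
    silence = [0.0] * CHUNK
    if not bit:
        return silence * 3
    sine = [math.sin(2 * math.pi * FREQ * i / RATE) for i in range(CHUNK)]
    return silence + sine + silence

def encode_bits(bits):
    pattern = []
    for b in bits:
        pattern.extend(encode_bit(b))
    return pattern

def chunk_has_tone(chunk, threshold=1.0):
    """Crude per-chunk tone decision: total energy above a threshold."""
    return 1 if sum(x * x for x in chunk) > threshold else 0

def decode_bits(pattern):
    """Per Hong ¶ 0063: 000/100/001/101 -> 0 and 010/110/011/111 -> 1,
    i.e. each received triple maps to the value of its middle chunk."""
    chunks = [pattern[i:i + CHUNK] for i in range(0, len(pattern), CHUNK)]
    flags = [chunk_has_tone(c) for c in chunks]
    return [flags[i + 1] for i in range(0, len(flags) - 2, 3)]

assert decode_bits(encode_bits([1, 0, 1, 1, 0])) == [1, 0, 1, 1, 0]
```

The middle-chunk rule is what makes the scheme tolerant of the one-chunk misalignment the citation describes: a tone that smears into a neighboring chunk (110 or 011) still decodes to bit 1.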
As to Claim 17, Hong depending on Claim 15, Hong teaches wherein the transmitting device further comprises a data profile definition module configured to define a data profile for the encoded message blocks, including information about frequency bands, the carrier frequencies, and modulation parameters (The binary values may be transformed into the audio pattern by utilizing a sound wave, such as an audio sine wave. The sound wave may be high frequency, such as an inaudible 19 kHz sine sound wave, in order to make the sound wave normally inaudible to a user. The high frequency sound wave may correspond to a binary value, for example 1, while silence corresponds to 0. However, in other embodiments, a different pattern may be utilized. For example, a "triple chunk" pattern may be utilized, wherein, when assigning 1 to an audio signal group, a sample of a sine wave with a known length, and a specific inaudible frequency, such as 19 kHz, may be placed between two samples of silence of the same known length to form the group. Additionally, in such an encoding scheme, 0 may also correspond to three samples of silence using the known length. Each sample is designated as a "chunk." Thus, in such an audio pattern, a binary triple of 3 bits is used to indicate the audio pattern of a bit, where 010, corresponding to silence/audio sine wave/silence, is used to indicate 1 and 000, corresponding to silence/silence/silence, is used to indicate 0, see ¶ 0052). As to Claim 18, Hong depending on Claim 17, Hong teaches wherein the receiving device further comprises a data profile utilization module configured to retrieve the data profile defined by the transmitting device and use it to detect and decode the inaudible signals within the captured audio data (The binary values may be transformed into the audio pattern by utilizing a sound wave, such as an audio sine wave. 
The sound wave may be high frequency, such as an inaudible 19 kHz sine sound wave, in order to make the sound wave normally inaudible to a user. The high frequency sound wave may correspond to a binary value, for example 1, while silence corresponds to 0. However, in other embodiments, a different pattern may be utilized. For example, a "triple chunk" pattern may be utilized, wherein, when assigning 1 to an audio signal group, a sample of a sine wave with a known length, and a specific inaudible frequency, such as 19 kHz, may be placed between two samples of silence of the same known length to form the group [predetermined spacing between carrier frequencies]. Additionally, in such an encoding scheme, 0 may also correspond to three samples of silence using the known length. Each sample is designated as a "chunk." Thus, in such an audio pattern, a binary triple of 3 bits is used to indicate the audio pattern [duration of each modulated message] of a bit, where 010, corresponding to silence/audio sine wave/silence, is used to indicate 1 and 000, corresponding to silence/silence/silence, is used to indicate 0, see ¶ 0052; As an audio pattern frequency of 19 kHz is at the upper limit of perceivable sounds by an ordinary person across all age ranges, a 19 kHz wave may be generally used. However, lower frequency waves may be used where the population is known to be of a different demographic. In general, a sound wave of 15 kHz to 20 kHz may be used and be made inaudible [number of frequencies used for encoding] to the ordinary person depending on the target demographic, see ¶ 0021; audio patterns may include sound samples and silent samples. In one embodiment described above, the audio pattern to designate each bit utilizes a "triple chunk" pattern that includes sets of 3 samples of known length, wherein the middle sample corresponds to either an increasing/decreasing amplitude sine wave sample, or a silent sample. 
Thus, the difference between binary 1 and 0 is the appearance of a sine wave of 19 kHz. When audio data is streamed to a buffer, data chunks of 3 samples using the known length may be extracted for analysis, see ¶ 0060; the Goertzel algorithm may be used to detect an encoded audio pattern. The Goertzel algorithm provides for evaluation of a Discrete Fourier Transform using a small number of selected frequencies efficiently. For example, if the specific frequency is set at 19 kHz, the Goertzel algorithm may be utilized to analyze an incoming audio signal and detect the start of an audio pattern. After detecting the start of an audio pattern, an FFT may be utilized for decoding the audio pattern, see ¶ 0062). As to Claim 19, Hong depending on Claim 15, Hong teaches wherein the transmitting device further comprises a library of predefined inaudible signals, and the modulation module is configured to select and use these predefined signals for encoding the message blocks (Encoding/decoding application 112 may correspond to a software program executable by a hardware processor that is configured to encode messages 122 in database 120. Encoding/decoding application 112 may include processes for encoding messages 122 into audio patterns. The audio pattern may correspond to a sound wave pattern that is decodable by an application with knowledge of the encoding/decoding scheme. The audio pattern created by encoding/decoding application 112 and messages 122 may be configured so the sound wave is normally inaudible to a user, see ¶ 0028). As to Claim 20, Hong depending on Claim 15, Hong teaches further comprising a remote server having: a. a processor; b. a memory (Exemplary devices and servers may include, for example, devices, stand-alone, and enterprise-class servers, operating an OS such as a MICROSOFT® OS, a UNIX® OS, a LINUX® OS, or other suitable device and/or server based OS. It can be appreciated that the devices and/or servers illustrated in FIG. 
1 may be deployed in other ways and that the operations performed and/or the services provided by such devices and/or servers may be combined or separated for a given embodiment and may be performed by a greater number or fewer number of devices and/or servers, see ¶ 0017); c. a transmitting device receiving module to receive the user input from the transmitting device (messages 122 may contain a process for execution by a user device, such as user device 160. The messages may be received from an outside source, such as a media content provider, for use with an audio portion of media content. The messages may also be input into the encoding device, for example, using an input device such as a keyboard or mouse, see ¶ 0045), an encoding module to perform the encoding of the message blocks into binary data and digital signals (the message is encoded into an audio pattern by the encoding device. An encoding process or application, for example, encoding/decoding application 112, may perform the encoding. The encoding/decoding application may perform the encoding using messages 122, see ¶ 0046; Encoding of the audio pattern from messages 122 may be done using an audio encoding technology. The audio encoding technology may utilize a coding scheme where text and/or data is translated into binary values, which may then be encoded. For example, letters and digits may be mapped to a binary value. Below is a table showing text and digits mapped to binary values between 0 and 44, which may be encoded by 6 bits, where C is the character, and V is the binary value, see ¶ 0047), a modulation module to modulate the carrier frequencies (The binary values may be transformed into the audio pattern by utilizing a sound wave, such as an audio sine wave. The sound wave may be high frequency, such as an inaudible 19 kHz sine sound wave, in order to make the sound wave normally inaudible to a user. 
The high frequency sound wave may correspond to a binary value, for example 1, while silence corresponds to 0. However, in other embodiments, a different pattern may be utilized. For example, a "triple chunk" pattern may be utilized, wherein, when assigning 1 to an audio signal group, a sample of a sine wave with a known length, and a specific inaudible frequency, such as 19 kHz, may be placed between two samples of silence of the same known length to form the group. Additionally, in such an encoding scheme, 0 may also correspond to three samples of silence using the known length. Each sample is designated as a "chunk." Thus, in such an audio pattern, a binary triple of 3 bits is used to indicate the audio pattern of a bit, where 010, corresponding to silence/audio sine wave/silence, is used to indicate 1 and 000, corresponding to silence/silence/silence, is used to indicate 0, see ¶ 0052), a generation module to generate the audio track (Once the message has been encoded into an audio pattern, the encoding device may contain audio content for transmission with the audio pattern. At step 206, the audio pattern encoded at step 204 is further mixed with audio content, such as audio content 126 in database 120. The audio content may correspond to an audio track of an audiovisual content, such as a television broadcast, or may correspond to a separate audio content, such as a radio broadcast. The audio pattern may be mixed with the audio content in such a way so that the audio pattern is inaudible to an audience consuming the audio content, but is perceptible to a user device, such as user device 160, see ¶ 0054), and a transmitting device sending module to transmit the audio track to the transmitting device for transmission through the audio emitting component (Once the message has been encoded into an audio pattern, the encoding device may contain audio content for transmission with the audio pattern. 
At step 206, the audio pattern encoded at step 204 is further mixed with audio content, such as audio content 126 in database 120. The audio content may correspond to an audio track of an audiovisual content, such as a television broadcast, or may correspond to a separate audio content, such as a radio broadcast. The audio pattern may be mixed with the audio content in such a way so that the audio pattern is inaudible to an audience consuming the audio content, but is perceptible to a user device, such as user device 160, see ¶ 0054); d. a receiving device receiving module to receive the captured audio data from the receiving device, a detection module to detect and decode the inaudible signals within the captured audio data, a decoding module to convert the decoded signals into user-readable data, and a receiving device sending module to send the user-readable data back to the receiving device. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. 
Patentability shall not be negated by the manner in which the invention was made. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim(s) 3 is rejected under 35 U.S.C. 103(a) as being unpatentable over U.S. Patent Publication 2014/0029768 to Hong et al (“Hong”) in view of WIPO Publication 2024/0135941 to Jang et al (“Jang”). As to Claim 3, Hong depending on Claim 1, Hong teaches embedding the audio track into existing audio content (Once the message has been encoded into an audio pattern, the encoding device may contain audio content for transmission with the audio pattern. At step 206, the audio pattern encoded [modulated message] at step 204 is further mixed with audio content, such as audio content 126 in database 120. 
The audio content may correspond to an audio track of an audiovisual content, such as a television broadcast, or may correspond to a separate audio content, such as a radio broadcast, see ¶ 0054), wherein the embedding process includes: b. dynamically adjusting parameters of the embedding process (a different pattern may be utilized. For example, a "triple chunk" pattern may be utilized, wherein, when assigning 1 to an audio signal group, a sample of a sine wave with a known length, and a specific inaudible frequency, such as 19 kHz, may be placed between two samples of silence of the same known length to form the group [dynamically adjusting parameters of embedding process]. Additionally, in such an encoding scheme, 0 may also correspond to three samples of silence using the known length. Each sample is designated as a "chunk." Thus, in such an audio pattern, a binary triple of 3 bits is used to indicate the audio pattern of a bit, where 010, corresponding to silence/audio sine wave/silence, is used to indicate 1 and 000, corresponding to silence/silence/silence, is used to indicate 0, see ¶ 0052); and c. utilizing frequency masking techniques to hide the audio track within gaps in the frequency spectrum of the existing audio content (a different pattern may be utilized. For example, a "triple chunk" pattern may be utilized, wherein, when assigning 1 to an audio signal group, a sample of a sine wave with a known length, and a specific inaudible frequency, such as 19 kHz, may be placed between two samples of silence of the same known length to form the group [predetermined spacing between carrier frequencies]. Additionally, in such an encoding scheme, 0 may also correspond to three samples of silence using the known length. Each sample is designated as a "chunk." 
Thus, in such an audio pattern, a binary triple of 3 bits is used to indicate the audio pattern [duration of each modulated message] of a bit, where 010, corresponding to silence/audio sine wave/silence [hide audio track within gaps in the frequency spectrum of existing audio content], is used to indicate 1 and 000, corresponding to silence/silence/silence, is used to indicate 0, see ¶ 0052). Hong does not expressly disclose a. modulating the audio track to match the amplitude and phase characteristics of the existing audio content. Jang teaches a. modulating the audio track to match the amplitude and phase characteristics of the existing audio content (the first frequency band may include a frequency band having greater energy than the second frequency band, see ¶ 0010; the third information may include information on a degree of mixing of the signal components of the first frequency band and the signal components of the second frequency band. For example, the third information may include the energy difference, phase difference and correlation between frequency bands including the signal components of the first frequency band and the signal components of the second frequency band, see ¶ 0062). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Hong with Jang to teach modulating the audio track to match the amplitude and phase characteristics of the existing audio content. The suggestion/motivation would have been in order to compress a wide-band audio signal using a codec operating in a narrow frequency band (see ¶ 0006). Claim(s) 4, 6 are rejected under 35 U.S.C. 103(a) as being unpatentable over U.S. Patent Publication 2014/0029768 to Hong et al (“Hong”) in view of WIPO Publication 2014/101169 to Meng et al (“Meng”) (relied upon English translation). 
As to Claim 4, Hong depending on Claim 1, Hong does not expressly disclose wherein the modulation of each of the one or more converted message blocks is performed via shift keying techniques. Meng teaches wherein the modulation of each of the one or more converted message blocks is performed via shift keying techniques (in step S240, the digital signal obtained in step S230 is modulated in the high frequency band to obtain high frequency digital signals. In this embodiment, the high frequency band may be a high audio segment or an ultrasonic frequency band. Preferably, the high frequency band is in the frequency range of 18 kHz to 22 kHz…the modulation of the signal may include amplitude modulation (ASK), frequency modulation (FSK), and phase modulation (PSK) of the signal. In the present embodiment, frequency modulation or phase modulation can be used, see page 4, 2nd – 3rd para). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Hong with Meng to teach wherein the modulation of each of the one or more converted message blocks is performed via shift keying techniques. The suggestion/motivation would have been in order to modulate digital signals in the high frequency band to obtain high frequency digital signals (see page 2, 5th para). As to Claim 6, Hong depending on Claim 1, Hong does not explicitly disclose wherein the audio track comprises ultrasonic tones. Meng teaches wherein the audio track comprises ultrasonic tones (high frequency band may be a high audio segment or an ultrasonic frequency band. Preferably, the high frequency band is in the frequency range of 18 kHz to 22 kHz, see page 4, 2nd para; after the high frequency digital signal is generated, the generated high frequency digital signal and the original audio data stream are synthesized in step S130 to obtain an enhanced audio data stream [audio track], see page 4, 6th para). 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Hong with Meng to teach wherein the audio track comprises ultrasonic tones. The suggestion/motivation would have been in order to modulate digital signals in the high frequency band to obtain high frequency digital signals (see page 2, 5th para). Claim(s) 8, 9 are rejected under 35 U.S.C. 103(a) as being unpatentable over U.S. Patent Publication 2014/0029768 to Hong et al (“Hong”) in view of U.S. Patent Publication 2007/0116324 to Baum et al (“Baum”). As to Claim 8, Hong depending on Claim 1, Hong does not expressly disclose wherein the modulating of each of the one or more converted message blocks onto the one or more carrier frequencies is performed in parallel across multiple carrier frequencies. Baum teaches wherein the modulating of each of the one or more converted message blocks onto the one or more carrier frequencies is performed in parallel across multiple carrier frequencies (The pseudo noise sequences are modulated onto one or more carrier frequencies which are inserted at one or more frequency bands into the spectrum of an audio signal, see Abstract). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Hong with Baum to teach wherein the modulating of each of the one or more converted message blocks onto the one or more carrier frequencies is performed in parallel across multiple carrier frequencies. The suggestion/motivation would have been in order to insert an information signal that can be used for watermarking digital audio signals (see Abstract). 
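The shift-keying techniques the Meng citations describe (ASK, FSK, PSK in an 18 kHz to 22 kHz band) can be sketched as a minimal binary FSK round trip. The two carrier frequencies, symbol length, and correlation-based demodulator below are illustrative assumptions, not details from Meng or from the application.

```python
import math

RATE = 48000             # sample rate (assumed)
F0, F1 = 18500, 19500    # two carriers inside Meng's 18-22 kHz band (assumed values)
N = 480                  # samples per symbol (assumed; both carriers complete whole cycles)

def fsk_modulate(bits):
    """Binary FSK: each bit keys one of two high-frequency carriers."""
    signal = []
    for b in bits:
        f = F1 if b else F0
        signal.extend(math.sin(2 * math.pi * f * i / RATE) for i in range(N))
    return signal

def tone_power(chunk, f):
    """Energy at frequency f via correlation (one DFT bin)."""
    re = sum(x * math.cos(2 * math.pi * f * i / RATE) for i, x in enumerate(chunk))
    im = sum(x * math.sin(2 * math.pi * f * i / RATE) for i, x in enumerate(chunk))
    return re * re + im * im

def fsk_demodulate(signal):
    """Decide each symbol by which carrier holds more energy."""
    bits = []
    for start in range(0, len(signal), N):
        chunk = signal[start:start + N]
        bits.append(1 if tone_power(chunk, F1) > tone_power(chunk, F0) else 0)
    return bits

assert fsk_demodulate(fsk_modulate([1, 0, 1, 1, 0])) == [1, 0, 1, 1, 0]
```

Choosing carriers that complete an integer number of cycles per symbol keeps them orthogonal over the symbol window, which is why the simple energy comparison suffices in this sketch. Baum's parallel-carrier approach would extend this by running several such carriers in distinct bands at once.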
As to Claim 9, Hong and Baum depending on Claim 8, Hong teaches wherein the predefined frequency bands are selected from frequencies above 16,000 Hz and below 300 Hz (As an audio pattern frequency of 19 kHz is at the upper limit of perceivable sounds by an ordinary person across all age ranges, a 19 kHz wave may be generally used. However, lower frequency waves may be used where the population is known to be of a different demographic. In general, a sound wave of 15 kHz to 20 kHz may be used and be made inaudible [number of frequencies used for encoding] to the ordinary person depending on the target demographic, see ¶ 0021). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to EBONI N GILES whose telephone number is (571)270-7453. The examiner can normally be reached Monday - Friday 9 am - 6 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PATRICK EDOUARD, can be reached at (571)272-7603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /EBONI N GILES/Examiner, Art Unit 2622 /PATRICK N EDOUARD/Supervisory Patent Examiner, Art Unit 2622

Prosecution Timeline

Jun 27, 2024
Application Filed
Jan 27, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602962
CONTACTLESS OPTICAL INTERNET OF THINGS USER IDENTIFICATION DEVICE AND SYSTEM
2y 5m to grant Granted Apr 14, 2026
Patent 12599835
WEARABLE CONTROLLER
2y 5m to grant Granted Apr 14, 2026
Patent 12596895
LOW POWER BEACON SCHEDULING
2y 5m to grant Granted Apr 07, 2026
Patent 12575179
DISPLAY DEVICE AND METHOD FOR DRIVING DISPLAY DEVICE
2y 5m to grant Granted Mar 10, 2026
Patent 12573294
SYSTEMS AND METHODS FOR ALERTING PERSONS OF APPROACHING VEHICLES
2y 5m to grant Granted Mar 10, 2026
List based on this examiner's 5 most recent grants in similar technology.


Prosecution Projections

1-2
Expected OA Rounds
63%
Grant Probability
72%
With Interview (+8.6%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 697 resolved cases by this examiner. Grant probability derived from career allow rate.
