Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-3 and 5-9 are pending. Claim 1 is independent.
Claims 2-3 and 5-9 depend from Claim 1.
Claim 4 is cancelled.
This Application was published as U.S. 2024/0194217.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 30 Dec 2025 has been entered.
Response to Amendment
Examiner thanks Applicant for the response filed on 30 Dec 2025, which has been entered and considered in this Office action. Claims 1-3 and 5-9 are pending.
Response to Arguments
With regards to the rejections under 35 U.S.C. § 103, Applicant has provided arguments, see pages 5-10, filed 30 Dec 2025, and has amended claim 1. The amendments to the claims and the arguments have been fully considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The rejections under 35 U.S.C. § 103 are maintained as set forth below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Mortensen et al. (US2019/0355383, hereinafter Mortensen) in view of Yan (US2023/0086735, hereinafter Yan), Kane (US2022/0201121, hereinafter Kane), and Steiner et al. (US2023/0062377, hereinafter Steiner), and in further view of Sharma et al. (US2023/0230599, hereinafter Sharma).
With regards to claim 1, Mortensen teaches:
A data processing method for acoustic event comprising performing a plurality of steps by a processor, [Fig 3, item 304] wherein the plurality of steps comprises: establishing a simulated acoustic frequency event module, [Fig 5, item 500-501] a data capturing module, [Fig 5, item 500-501] and a sound application decision module in a software manner, [Fig 5, item 516, Par [0229]]
wherein the simulated acoustic frequency event module comprises: a plurality of frequency band filter modules, [Fig 5, item 501, 502, and 504] a plurality of energy estimation modules connecting to the plurality of frequency band filter modules, [Fig 5, item 500, 501, and 504] and a plurality of frequency event quantizers connecting to the plurality of energy estimation modules [Fig 11, item 1104 where Fig 11 shows a one channel configuration where an analog digital converter (ADC) is a frequency event quantizer; however Mortensen teaches “More channels or pairs of channels can be used to detect different types of voices to improve detection and/or to detect voices present in different audio streams”(Par [0004])]
setting at least one of the plurality of frequency band filter modules, the plurality of energy estimation modules and the plurality of frequency event quantizers according to a simulated hardware parameter; [Mortensen Par [0098] teaches frequency band filter module can have simulated hardware parameters such that “Any suitable filter can be used for reducing the bandwidth of the incoming audio stream to just the first frequency band, e.g., the frequency band of interest which covers a reasonable number of vowel F1 formant frequencies”]
inputting a sound signal [Fig 5 incoming audio stream and Fig 11 item 1102] to the plurality of frequency band filter modules [Fig 5 and 11] and obtaining a plurality of metadata from the plurality of frequency event quantizers, wherein the sound signal is an analog electric signal and the plurality of metadata is digital signals; [Mortensen teaches obtaining a plurality of metadata, or digital signals, from the plurality of ADCs or frequency event quantizers in Fig 11 with output into decision module (516), where more than one channel can be used (Par [0004]) for the plurality of frequency event quantizers. Analog electrical signals can be obtained from the plurality of frequency band filters (502) shown in Fig 5 for 2 channels with output into decision module (516)]
wherein the simulated hardware parameter is configured to be assigned to the plurality of frequency event quantizers, and the simulated hardware parameter comprises a data dynamic range, [Mortensen Fig 20 item 612, Par [0177] teaches "crest detector has a top tracker which tracks the peaks of the signal and a bottom tracker which tracks the quiet periods of the signal. The difference between these two is the modulation index of the signal" (Par [0112]) where the modulation index is the data dynamic range]
a number of channels, [Mortensen Fig 5 and 11] a number of the plurality of frequency event quantizers is equal to the number of channels, [Mortensen Fig 11]
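For illustration only, the following is a minimal Python sketch (assuming NumPy and SciPy are available) of a simulated multi-channel pipeline of the kind mapped above, in which each channel comprises a frequency band filter module, an energy estimation module, and a frequency event quantizer. All names, band edges, and the threshold value are hypothetical and are not taken from Mortensen.

import numpy as np
from scipy.signal import butter, sosfilt

def simulate_channels(sound, fs, bands, threshold=0.01):
    """Return one binary event stream (metadata) per frequency channel."""
    events = []
    for low, high in bands:                       # one channel per frequency band
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        filtered = sosfilt(sos, sound)            # frequency band filter module
        energy = np.abs(filtered)                 # crude energy estimation module
        events.append((energy > threshold).astype(np.uint8))  # frequency event quantizer
    return np.stack(events)                       # shape: (number of channels, samples)

# Hypothetical usage: two formant-like channels over a 16 kHz tone
fs = 16_000
t = np.arange(fs) / fs
sound = 0.5 * np.sin(2 * np.pi * 440 * t)
metadata = simulate_channels(sound, fs, bands=[(300, 800), (800, 2500)])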
With regards to claim 1, Mortensen fails to teach:
dividing each of the plurality of metadata into a plurality of frames according to a time interval by the data capturing module, wherein each of the plurality of frames has a timestamp;
accumulating an event number of each of the plurality of frames by the data capturing module, setting a label of each of the plurality of frames according to the event number, and storing the plurality of frames, the event number and the label in a database; and
training a decision model by the sound application decision module according to the database and a sound application.
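For illustration only, the following is a minimal Python sketch (assuming NumPy) of the claimed framing and labeling step: each metadata stream is divided into frames by a time interval, an event number is accumulated per frame, and a label is set from that count. The interval, label rule, and record format are hypothetical assumptions and are not taken from any applied reference.

import numpy as np

def frame_and_label(event_stream, fs, interval_s=0.02, min_events_for_label=1):
    """Split a binary event stream into frames, count events, and assign labels."""
    frame_len = int(interval_s * fs)
    records = []
    for i in range(len(event_stream) // frame_len):
        frame = event_stream[i * frame_len:(i + 1) * frame_len]
        event_number = int(frame.sum())                # accumulate events within the frame
        records.append({
            "timestamp": i * interval_s,               # each frame has a timestamp
            "frame": frame,
            "event_number": event_number,
            "label": 1 if event_number >= min_events_for_label else 0,
        })
    return records                                     # rows suitable for storing in a database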
With regards to claim 1, Yan teaches:
dividing each of the plurality of metadata into a plurality of frames according to a time interval by the data capturing module, [Yan Fig 4a teaches visual relationship system (102) or data capturing module “dividing the video 114 into subsections, e.g., 30 second clips of video, and then selecting a representative frame of each subsection to be the key frame 11” (Par [0057]) where video consists of a plurality of digital signals or metadata which is divided into a plurality of frames according to a time interval by the data capturing module]
wherein each of the plurality of frames has a timestamp; [Fig 4a item 207, Par [0057]]
accumulating an event number of each of the plurality of frames by the data capturing module, [Yan Par [0057] teaches “a key frame 115 can be associated with a timestamp 207 that indicates a frame number of a total number of frames of the video 114, for example, 127/400, where the key frame 115 appears at the 127th frame of a total of 400 frames of the video 11”]
setting a label of each of the plurality of frames according to the event number, and [Yan Par [0057] teaches a “label assigned to the key frame 115” … [and] “a key frame 115 can be associated with a timestamp 207 that indicates a frame number of a total number of frames of the video 114”]
storing the plurality of frames, the event number and the label in a database; and [Yan Par [0057] teaches "timestamp 207 can be associated with the key frame 115 in an index/table," "timestamp 207 can be a label assigned to the key frame 115" and "key frame 115 can be associated with a timestamp 207 that indicates a frame number of a total number of frame," where the index/table can be used to develop the "scene graph index [that] can be a lookup table that identifies each key frame and its corresponding scene graph and timestamp, as depicted in FIG. 2A," which is stored in scene graph database (218).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the digital signals from the voice activity detection system as taught by Mortensen with the creating of frames and labels for the digital signals as taught by Yan. The motivation to combine the teachings of Mortensen with the teachings of Yan is that Yan teaches creating an "index/table" (Par [0057]) with a timestamp and label that "makes it easier to search" (Par [0015]), which increases the capabilities of the invention of Mortensen]
With regards to claim 1, Mortensen in view of Yan fails to teach:
training a decision model by the sound application decision module according to the database and a sound application.
With regards to claim 1, Kane teaches:
training a decision model by the sound application decision module according to the database and a sound application. [Kane Par [0048] teaches training "various machine learning models," which is a decision model, by computing features such as "acoustic measurements, such as pitch, energy, voice activity detection, speaking rate, turn-taking characteristics, and time-frequency spectral coefficient," which are sound applications, and using "labeled training data in the behavior training database 116."
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the digital signals from the voice activity detection system as taught by Mortensen with the creating of frames and labels for the digital signals as taught by Yan, and with the training of a machine learning model as taught by Kane. The motivation to combine the teachings of Mortensen and Yan with the teachings of Kane is that Kane teaches to "compute features used as input to machine learning models (such models may be developed offline and, once developed, can make inferences in real-time)" (Par [0034]), which increases the capabilities of the invention of Mortensen in view of Yan by enabling a real-time process]
With regards to claim 1, Mortensen in view of Yan and Kane fails to teach:
a bit width, a time resolution, a time interval and the plurality of frequency event quantizers are configured to output a first value representing that an event occurs when an energy of an input signal is greater than a threshold, and output a second value representing that the event does not occur when the energy of the input signal is smaller than the threshold.
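For illustration only, the following is a minimal Python sketch (assuming NumPy) of a frequency event quantizer parameterized by the simulated hardware parameters recited above: a bit width, a time resolution derived from a sampling rate, and a threshold that selects between a first value and a second value. The parameter values and names are hypothetical assumptions and are not taken from Steiner.

import numpy as np

def quantize_events(energy, bit_width=8, sample_rate_hz=5e6, threshold=0.1):
    time_resolution_s = 1.0 / sample_rate_hz         # e.g., 1 / (5 Msps) = 200 ns
    levels = 2 ** bit_width                          # code range set by the bit width
    codes = np.clip((energy * levels).astype(int), 0, levels - 1)
    # First value (1) when the energy exceeds the threshold; second value (0) otherwise.
    events = np.where(energy > threshold, 1, 0).astype(np.uint8)
    return events, codes, time_resolution_s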
With regards to claim 1, Steiner teaches:
a bit width, [Steiner Par [0028] teaches "bit width of the ADC can be, for example, 8 bit or higher"]
a time resolution, [Steiner Par [0028] teaches "ADC sampling frequency may be set to 5 msps" where the time resolution may be calculated as the inverse of the sampling frequency, which equals 200 ns]
a time interval and [Steiner teaches "predetermined observation window has a start time and an end time" (Par [0028]) … "For example, 10 us or 20 us are possible observation window durations" (Par [0031])]
the plurality of frequency event quantizers are configured to output a first value representing that an event occurs when an energy of an input signal is greater than a threshold, [Steiner teaches “If the Euclidian distance dNoTouch is equal to or greater than the first threshold value, the evaluation processing circuit 207 may detect a touch event” (Par [0049]) where the value is a touch event]
and output a second value representing that the event does not occur when the energy of the input signal is smaller than the threshold. [Steiner teaches “If the Euclidian distance dNoTouch is less than the first threshold value Threshold 1, the evaluation processing circuit 207 may detect a no-touch event” (Par [0049]) where the value is a no touch event]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the voice activity detection system as taught by Mortensen in view of Yan and Kane with the analog to digital converter (ADC) and touch sensor as taught by Steiner. The motivation to combine the teachings of Mortensen, Yan, and Kane with the teachings of Steiner is that Steiner teaches using a touch sensor that "relies on the transmission of an ultra-sonic signal and the reception and processing of the reflected waveform from the touch surface" (Steiner Par [0001]), which extends the invention of Mortensen in view of Yan and Kane to an expanded acoustic range and allows the potential for other applications such as a touch sensor]
With regards to claim 1, Mortensen in view of Yan, Kane, and Steiner fails to teach:
wherein each of the plurality of metadata comprises an event-indicative time-series including a plurality of event indicators, each event indicator indicating occurrence or non-occurrence of an event at a corresponding time;
With regards to claim 1, Sharma teaches:
wherein each of the plurality of metadata comprises an event-indicative time-series including a plurality of event indicators, each event indicator indicating occurrence or non-occurrence of an event at a corresponding time; [Sharma Fig 4 teaches augmentation process (10) creates event indicators where speech is active or inactive (Par [0067]) which are indicative of occurrence or non-occurrence of an event where “augmentation process 10 may generate acoustic metadata with timestamps indicating portions of first device speech signal 402 that include speech activity (e.g., start and end times for each portion)” (Par [0067])]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the voice activity detection system as taught by Mortensen in view of Yan, Kane, and Steiner with the data augmentation system as taught by Sharma. The motivation to combine the teachings of Mortensen, Yan, Kane, and Steiner with Sharma is that Sharma teaches "data augmentation may allow for the generation of new training data for a machine learning system by augmenting existing data to represent new conditions" (Par [0002]), which increases the accuracy of the voice activity detection invention of Mortensen in view of Yan, Kane, and Steiner]
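For illustration only, the following is a minimal Python sketch (assuming NumPy) of deriving an event-indicative time-series of the kind recited above, reporting start and end times of active portions from a binary event stream. The function name and interval logic are hypothetical assumptions and are not taken from Sharma.

import numpy as np

def event_intervals(events, fs):
    """Return (start_s, end_s) pairs for runs where the event indicator is 1."""
    padded = np.concatenate(([0], events, [0]))
    edges = np.flatnonzero(np.diff(padded))   # rising and falling edges alternate
    starts, ends = edges[::2], edges[1::2]
    return [(s / fs, e / fs) for s, e in zip(starts, ends)]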
With regard to claim 6, Mortensen in view of Yan, Kane, Steiner, and Sharma teaches:
All the limitations of claim 1
further comprising adjusting the simulated hardware parameter by the processor according to an accuracy of the decision model, [Mortensen Par [0222] teaches “output indicating whether voice activity detection can have discrete levels indicating varying probabilities that voice activity is present” where outputting probabilities can determine the accuracy of the decision model]
an adjusted record of the simulated hardware parameter. [Mortensen Fig 25 Par [0197] teaches a registry file map that “serves as storage for audio input, parameters for the computations, calculated values as well as mapped locations for CSR's (control status registers), etc” which records parameters and can be adjusted]
With regards to claim 6, Mortensen in view of Yan fails to teach:
an accuracy threshold, and
With regards to claim 6, Kane teaches:
an accuracy threshold, and [Kane Par [0079] teaches “machine learning model outputs is typically a probability, so this needs to be binarized by applying a threshold.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Mortensen in view of Yan, which determine the accuracy of the decision model with a probability, with the teachings of Kane, which apply a threshold to the probability outputs of the machine learning model. The motivation to combine the teachings of Mortensen and Yan with the teachings of Kane is that Kane teaches to "compute features used as input to machine learning models (such models may be developed offline and, once developed, can make inferences in real-time)" (Par [0034]), which increases the capabilities of the invention of Mortensen in view of Yan by enabling a real-time process]
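For illustration only, the following is a minimal Python sketch of adjusting a simulated hardware parameter when the decision model's accuracy falls below an accuracy threshold, while keeping an adjusted record of the parameter. The adjustment rule, names, and values are hypothetical assumptions and are not taken from Mortensen or Kane.

def adjust_parameter(params, accuracy, accuracy_threshold=0.9, record=None):
    """Lower a hypothetical energy threshold whenever accuracy is below the threshold."""
    record = [] if record is None else record
    if accuracy < accuracy_threshold:
        old_value = params["energy_threshold"]
        params["energy_threshold"] = old_value * 0.9     # hypothetical adjustment rule
        record.append({"parameter": "energy_threshold",
                       "old": old_value,
                       "new": params["energy_threshold"]})
    return params, record                                 # adjusted record of the parameter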
With regard to claim 7, Mortensen in view of Yan, Kane, Steiner, and Sharma teaches:
All the limitations of claim 1
wherein the sound application comprises a voice activity detection, [Mortensen Fig 5, Par [0097]]
a keyword spotting, [Mortensen Par [0204] teaches “frequency bands can be adjusted based on one or more pre-defined utterance/phrase. Specifically, the frequency band of a particular channel can be tuned for one or more specific vowels of interest. For instance, various voice activated programs triggers when a user utters or say a particular keyword” which describes keyword spotting]
an acoustic environment identification, [Mortensen Par [0146] teaches “processor performing the process triggered by the output signal(s) of the decision modules can in some cases select a suitable process based on the information inferred. The resulting system can be more aware of the environment near these audio capturing devices, and thus provide contextually aware processes in response to the outputs of the voice activity detector” which describes an acoustic environment identification]
an acoustic abnormal sound detection, and [Mortensen Par [0196] teaches "updating of one or more parameters based on environmental conditions (e.g., level of noise in the environment)" where the level of noise can be an abnormal sound]
an output number of the decision model is associated with the sound application. [Mortensen teaches “the decision module 516 can be provide a counting filter (absorbing the low pass filtering module 514)” (Par [0104]) where the “decision module 516 is configured to output a signal to indicate that voice activity is detected in the audio stream” (Par [0104]) and the count is the number of detected voice activities]
With regards to claim 7, Mortensen in view of Yan fails to teach:
the decision model is a fully connected neural network,
With regards to claim 7, Kane teaches:
the decision model is a fully connected neural network, [Kane Par [0078] teaches training the decision model by the sound application decision module according to the database and the sound application via “Supervised machine learning using neural networks.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the digital signals from the voice activity detection system as taught by Mortensen with the creating of frames and labels for the digital signals as taught by Yan, and with the training of a machine learning model using supervised learning as taught by Kane. The motivation to combine the teachings of Mortensen and Yan with the teachings of Kane is that Kane teaches to "compute features used as input to machine learning models (such models may be developed offline and, once developed, can make inferences in real-time)" (Par [0034]), which increases the capabilities of the invention of Mortensen in view of Yan by enabling a real-time process]
With regards to claim 7, Mortensen in view of Yan and Kane fails to teach:
an ultrasonic vibration detection, and
With regards to claim 7, Steiner teaches:
an ultrasonic vibration detection, and [Steiner Fig 1 item 106-7, Par [0021] teaches “receiver (RX) 106 configured to receive reflected ultra-sound signals, and a sensor circuit 107 (e.g., an application specific integrated circuit (ASIC)) configured to generate the ultra-sound signals for transmission by the transmitter” which describes ultrasonic vibration detection.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the voice activity detection system as taught by Mortensen in view of Yan and Kane with the analog to digital converter (ADC) and touch sensor as taught by Steiner. The motivation to combine the teachings of Mortensen, Yan, and Kane with the teachings of Steiner is that Steiner teaches using a touch sensor that "relies on the transmission of an ultra-sonic signal and the reception and processing of the reflected waveform from the touch surface" (Steiner Par [0001]), which extends the invention of Mortensen in view of Yan and Kane to an expanded acoustic range and allows the potential for other applications such as a touch sensor]
With regards to claim 8, Mortensen in view of Yan, Kane, Steiner, and Sharma teaches:
All the limitations of claim 1
With regards to claim 8, Mortensen in view of Yan fails to teach:
wherein training the decision model by the sound application decision module according to the database and the sound application is a supervised learning.
With regards to claim 8, Kane teaches:
wherein training the decision model by the sound application decision module according to the database and the sound application is a supervised learning. [Kane Par [0078] teaches training the decision model by the sound application decision module according to the database and the sound application via “Supervised machine learning using neural networks.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the digital signals from the voice activity detection system as taught by Mortensen with the creating of frames and labels for the digital signals as taught by Yan, and with the training of a machine learning model using supervised learning as taught by Kane. The motivation to combine the teachings of Mortensen and Yan with the teachings of Kane is that Kane teaches to "compute features used as input to machine learning models (such models may be developed offline and, once developed, can make inferences in real-time)" (Par [0034]), which increases the capabilities of the invention of Mortensen in view of Yan by enabling a real-time process]
With regard to claim 9, Mortensen in view of Yan, Kane, Steiner, and Sharma teaches:
All the limitations of claim 1
wherein a value setting of the simulated hardware parameter is associated with the sound application. [Mortensen teaches "More channels or pairs of channels can be used to detect different types of voices to improve detection and/or to detect voices present in different audio streams" (Par [0004]) where the number of channels is a value setting of the simulated hardware parameter that is associated with the sound application]
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Mortensen et al. (US2019/0355383) in view of Yan (US2023/0086735), Kane (US2022/0201121), Steiner et al. (US2023/0062377), and Sharma et al. (US2023/0230599), in further view of Hussin et al. (S. F. Hussin, G. Birasamy, and Z. Hamid, "Design of Butterworth Band-Pass Filter", Politeknik & Kolej Komuniti Journal of Engineering and Technology, vol. 1, no. 1, pp. 32–46, Nov. 2016), hereinafter Hussin.
With regards to claim 2, Mortensen in view of Yan, Kane, Steiner, and Sharma teaches:
All the limitations of claim 1
wherein the simulated hardware parameter is configured to be assigned to the plurality of frequency band filter modules, and the simulated hardware parameter comprises a frequency lower limit, [Mortensen Fig 11, item 1108 bandpass filter] a frequency upper limit, [Mortensen Fig 11, item 1108 bandpass filter] a filter bandwidth, [Mortensen Fig 11, item 1108 bandpass filter] a filter method, [Mortensen Fig 11, item 1108 bandpass filter] and a number of channels, [Mortensen Fig 5 and 11] and a number of the plurality of frequency band filter modules is equal to the number of channels [Mortensen Fig 5 and 11]
With regards to claim 2, Mortensen in view of Yan, Kane, Steiner, and Sharma fails to teach:
a filter gain
a central frequency,
a filter order,
With regards to claim 2, Hussin teaches:
a filter gain [Hussin Fig 4, pg 34-35 teaches that the "Bandpass Butterworth filter need to design in this paper must have the characteristic as shown in figure 4," which describes gain and frequency parameters]
a central frequency, [Hussin pg 33 teaches “Transmitted and received signals have to be filtered at a certain center frequency with a specific bandwidth” where central or center frequency will depend on lower and upper frequency bounds or bandwidth]
a filter order, [Hussin pg 41 teaches "Procedure of designing LPF and HPF is divided into two parts, the first part is finding the required order of the filter" where the low pass filter and high pass filter (LPF and HPF, respectively) are determined based on the order of the filter.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the voice activity detection system as taught by Mortensen in view of Yan, Kane, Steiner, and Sharma with the bandpass filter as taught by Hussin. The motivation to combine the teachings of Mortensen, Yan, Kane, Steiner, and Sharma with the teachings of Hussin is that "By providing suitable bandpass filters and channels, formant filtering can be used to detect bird sounds (or bird speech)" (Mortensen Par [0230]), which increases the capabilities of the invention of Mortensen in view of Yan, Kane, Steiner, and Sharma to better detect sounds]
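For illustration only, the following is a minimal Python sketch (assuming SciPy) of designing a band-pass Butterworth filter from the parameters discussed above: lower and upper frequency limits, bandwidth, center frequency, filter order, and a filter gain. The numeric values are hypothetical and are not taken from Hussin.

import numpy as np
from scipy.signal import butter, sosfreqz

f_low, f_high, fs, order, gain = 300.0, 3400.0, 16_000.0, 4, 1.0
center_frequency = np.sqrt(f_low * f_high)        # geometric center of the pass band
bandwidth = f_high - f_low                        # filter bandwidth
sos = butter(order, [f_low, f_high], btype="bandpass", fs=fs, output="sos")
w, h = sosfreqz(sos, worN=2048, fs=fs)            # frequency response of the design
response = gain * np.abs(h)                       # apply the filter gain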
Claims 3 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Mortensen et al. (US2019/0355383) in view of Yan (US2023/0086735), Kane (US2022/0201121), Steiner et al. (US2023/0062377), and Sharma et al. (US2023/0230599), in further view of Mansour et al. (US2019/0028130, hereinafter Mansour).
With regards to claim 3, Mortensen in view of Yan, Kane, Steiner, and Sharma teaches:
All the limitations of claim 1
wherein the simulated hardware parameter is configured to be assigned to the plurality of energy estimation modules, and the simulated hardware parameter comprises an energy gain, [Mortensen Fig 20 item 606, Fig 23, Par [0183] where the running average is an energy gain] an energy threshold, [Mortensen Fig 20, item 614, Par [0177]] and a number of channels, [Mortensen Fig 5 and 11] a number of the plurality of energy estimation modules is equal to the number of channels, [Mortensen Fig 5 and 11]
With regards to claim 3, Mortensen in view of Yan, Kane, Steiner, and Sharma fails to teach:
and the plurality of energy estimation modules are implemented by a waveform rectifier.
With regards to claim 3, Mansour teaches:
and the plurality of energy estimation modules are implemented by a waveform rectifier. [Mansour Fig 1, item 140, Par [0040] teaches calculation unit (140) is an energy estimation module for signals that can “represent energy, a power and in general, the presence of oscillations or amplitude in the particular associated band-pass filtered signal” that is implemented by rectifier (R1 through Rn) for waveforms from the band pass filter.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the voice activity detection system as taught by Mortensen in view of Yan, Kane, Steiner, and Sharma with the rectifier as taught by Mansour. The motivation to combine the teachings of Mortensen, Yan, Kane, Steiner, and Sharma with the teachings of Mansour is that "signal parameters can be calculated in the calculation step which each represent an energy and/or power of a band-pass filtered signal. Such an embodiment of the proposed approach offers the advantage of providing highly meaningful information by means of the signal parameter, which enables an easily implemented inference as to the relevance or the information contained in the band-pass filtered signal" (Mansour Par [0016]), which increases the capabilities of the invention of Mortensen in view of Yan, Kane, Steiner, and Sharma to provide information in the band pass signal processing]
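For illustration only, the following is a minimal Python sketch (assuming NumPy) of an energy estimation module implemented with a waveform rectifier followed by a running average. The smoothing constant and names are hypothetical assumptions and are not taken from Mansour.

import numpy as np

def rectify_and_average(band_signal, alpha=0.01):
    """Full-wave rectify a band-pass filtered signal and track its running average."""
    rectified = np.abs(band_signal)               # waveform rectifier
    energy = np.empty_like(rectified, dtype=float)
    acc = 0.0
    for i, x in enumerate(rectified):             # first-order running average
        acc = (1.0 - alpha) * acc + alpha * float(x)
        energy[i] = acc
    return energy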
With regards to claim 5, Mortensen in view of Yan, Kane, Steiner, and Sharma teaches:
All the limitations of claim 1
before inputting the sound signal to the plurality of band filter modules, further comprising: establishing an amplifier in the software manner; and [Mortensen Fig 11 item 1104, Par [0177] teaches “it is possible to not only implement the model in software embodied in non-transient computer-readable medium, it is possible to implement the model in hardware”]
inputting an audio stream into the amplifier to generate the sound signal, [Mortensen Fig 11 item 1104, Par [0150]]
With regards to claim 5, Mortensen in view of Yan, Kane, Steiner, and Sharma fails to teach:
wherein an output of the plurality of frequency band filter modules and an output of the plurality of energy estimation modules are one of voltage, current, and charge.
With regards to claim 5, Mansour teaches:
wherein an output of the plurality of frequency band filter modules and an output of the plurality of energy estimation modules are one of voltage, current, and charge. [Mansour Par [0040] teaches that "signal parameters can be calculated in the calculation step which each represent an energy and/or power of a band-pass filtered signal," where voltage, current, and charge can be related to power and energy, since power is the product of current and voltage, and current is equal to the change in charge over time.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the voice activity detection system as taught by Mortensen in view of Yan, Kane, Steiner, and Sharma with the rectifier as taught by Mansour. The motivation to combine the teachings of Mortensen, Yan, Kane, Steiner, and Sharma with the teachings of Mansour is that "signal parameters can be calculated in the calculation step which each represent an energy and/or power of a band-pass filtered signal. Such an embodiment of the proposed approach offers the advantage of providing highly meaningful information by means of the signal parameter, which enables an easily implemented inference as to the relevance or the information contained in the band-pass filtered signal" (Mansour Par [0016]), which increases the capabilities of the invention of Mortensen in view of Yan, Kane, Steiner, and Sharma to provide information in the band pass signal processing]
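For illustration only, the following is a short worked example restating the electrical relations referenced above: power is the product of voltage and current, energy is power accumulated over time, and charge is current accumulated over time. The numeric values are hypothetical.

voltage = 1.5              # volts
current = 0.002            # amperes
dt = 0.001                 # seconds
power = voltage * current  # P = V * I  -> 0.003 W
energy = power * dt        # E = P * dt -> 3.0e-6 J
charge = current * dt      # q = I * dt -> 2.0e-6 C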
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Joseph J Yamamoto whose telephone number is (571)272-4020. The examiner can normally be reached M-F 1000-1800 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached at 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JOSEPH J. YAMAMOTO
Examiner
Art Unit 2656
/BHAVESH M MEHTA/Supervisory Patent Examiner, Art Unit 2656