Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/21/2023 has been considered by the examiner.
Drawings
The drawings submitted on 11/21/2023 have been considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
The rejections of claims 2, 15, and 20 under 35 U.S.C. 112(b) are withdrawn in view of the amendments.
Response to Amendment
Claims 1-20 are currently pending in the application. Claims 1, 14, and 19 are independent claims; claims 2, 15, and 20 have been amended.
Response to Arguments
Applicant's arguments filed 1/27/2026 have been fully considered, but they are not persuasive for the following reasons:
Applicant's Arguments: It is believed that Streit fails to disclose or suggest claim 1. The Office Action at page 3 took the position that the disclosure of Streit that "acquire voice samples using a sliding time (e.g., 10 ms) window across, for example, one second of input" reads on the feature of Applicant's claim 1 of "the time-dependent vocal characteristic crossing a threshold value". Time itself is not a "time-dependent vocal characteristic" as recited in Applicant's claim 1. Thus, the sliding time window (e.g., a scalar value of the time itself having accrued) reaching some pre-determined value does not read on the language of Applicant's claim 1 of "the time-dependent vocal characteristic crossing a threshold value".
Examiner Response: Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Applicant merely argues that the examiner-cited prior art does not teach the claimed limitation ("Time itself is not a 'time-dependent vocal characteristic'") but does not elaborate what a time-dependent vocal characteristic is, either from the claim or from a definition of the term in Applicant's specification. In addition, Applicant's argument does not clarify how the cited prior art teaching of "acquire voice samples using a sliding time (e.g., 10 ms) window" differs from the claimed and/or specification-supported meaning of the limitation "the time-dependent vocal characteristic". Nothing Applicant presents in the argument is based on the specification or the claims defining the term, and nothing in the specification supports Applicant's argument. Since the limitation is broad, the examiner is entitled to interpret it broadly in light of the specification without importing any limitation from the specification directly into the claims. See MPEP 2111.01 I: "Under a broadest reasonable interpretation (BRI), words of the claim must be given their plain meaning, unless such meaning is inconsistent with the specification. The plain meaning of a term means the ordinary and customary meaning given to the term by those of ordinary skill in the art at the relevant time. The ordinary and customary meaning of a term may be evidenced by a variety of sources, including the words of the claims themselves, the specification, drawings, and prior art. However, the best source for determining the meaning of a claim term is the specification." See also MPEP 2111.01 II: "Though understanding the claim language may be aided by explanations contained in the written description, it is important not to import into a claim limitations that are not part of the claim. For example, a particular embodiment appearing in the written description may not be read into a claim when the claim language is broader than the embodiment." Superguide Corp. v. DirecTV Enterprises, Inc., 358 F.3d 870, 875, 69 USPQ2d 1865, 1868 (Fed. Cir. 2004).
Applicant's specification discusses the limitation "the time-dependent vocal characteristic crossing a threshold value" in paragraph [0012]: "The present embodiments identify when the time-dependent vocal characteristic crosses a threshold value to determine the suitable time point for generating the segment or chunk of audio data."; in [0033]: "Thus, the threshold value itself is used as a dividing point for dividing the acoustic recording file into a chunk for sending to the machine learning model."; in [0035]: "This threshold value presents an appropriate place for dividing the audio recording into a chunk."; and in [0054]: "In step 206 of the vocal characteristic-based audio data chunking process 200, the audio data is examined by time frame and to obtain a time-dependent vocal characteristic."
The prior art Streit's teaching of "acquire voice samples using a sliding time (e.g., 10 ms) window", examined for authentication/identification purposes over one second or five seconds of voice input, is a time-dependent voice sample (i.e., 1-second to 5-second voice samples and/or 10 ms segments/chunks) with a threshold of 10 ms, which is exactly what Applicant's specification teaches with respect to "the time-dependent vocal characteristic" and "crossing a threshold value". Streit's specification further teaches in [0050]: "Processing of the audio data can include capturing samples of time segments from the audio information." Streit further teaches: "[0082] According to one embodiment, the liveness component can be configured to generate a random set of biometric instances that the system requests a user submit. The random set of biometric instances can serve multiple purposes. For example, the biometric instances provide a biometric input that can be used for identification, and can also be used for liveness (e.g., validate matching to random selected instances). [0083] According to one embodiment, the liveness component 718 is configured to generate a random set of words that provide a threshold period of voice data from a user requesting authentication. [0103] In the voice biometric acquisition space, helper networks (e.g., helper DNNs) can be configured to isolate singular voices, and voice geometry voice helper networks can be trained to isolate single voices in audio data. In another example, helper network processing can include voice input segmentation to acquire voice samples using a sliding time (e.g., 10 ms) window across, for example, one second of input. [0121] In various embodiments, the system is configured to require >50 10 ms voice samples, to establish a desired level of accuracy and performance. In one example, the system is configured to capture voice instances based on a sliding 10 ms window that can be captured across one second of voice input, which enables the system to reach or exceed 50 samples. [0208] According to one embodiment, processing of a voice biometric can continue at 1008 with capture of at least a threshold amount of the biometric (e.g., 5 seconds of voice)."
Therefore, similar to Applicant's specification, Streit teaches that voice samples are acquired based on a threshold amount of time (which the examiner can interpret as time-dependent as well), i.e., one second or five seconds of voice data, where each segmented data unit or chunk is 10 ms long (a threshold which the examiner can likewise interpret as time-dependent).
Applicant's specification, particularly in [0012], similarly teaches that "The present embodiments identify when the time-dependent vocal characteristic crosses a threshold value to determine the suitable time point for generating the segment or chunk of audio data.", and in [0054] that "In step 206 of the vocal characteristic-based audio data chunking process 200, the audio data is examined by time frame and to obtain a time-dependent vocal characteristic.", just as Streit's audio/voice samples are required to authenticate a user's identity and liveness.
For authentication, Streit teaches collecting voice samples in time segments of 10 ms (which could likewise be interpreted as time-dependent voice samples, or voice biometrics with a segment or chunk threshold of 10 ms) from a time-dependent one second to five seconds (varying by situation) of voice data from a user, in order to authenticate the user's identity by verifying the user's voice biometrics and liveness.
The examiner's interpretation is further supported by Streit's teachings in [0135]: "According to one example, input voice data is transformed based on voice pulse code modulation (PCM). Processing of the audio data can include capturing samples of time segments from the audio information."; [0103]: "…processing can include voice input segmentation to acquire voice samples using a sliding time (e.g., 10 ms) window across, for example, one second of input."; [0121]: "In various embodiments, the system is configured to require >50 10 ms voice samples, to establish a desired level of accuracy and performance. In one example, the system is configured to capture voice instances based on a sliding 10 ms window that can be captured across one second of voice input, which enables the system to reach or exceed 50 samples."; [0187]: "The system generates random text that is selected to take roughly 5 seconds to speak (in whatever language the user prefers—and with other example threshold minimum periods). The user reads the text and the system (e.g., implemented as a private biometrics cloud service or component) then captures the audio and performs a speech to text process, comparing the pronounced text to the requested text. The system allows, for example, a private biometric component to assert the liveness of the requestor for authentication. In conjunction with liveness, the system compares the random text voice input and performs an identity assertion on the same input to ensure the voice that spoke the random words matches the user's identity."; and [0208]: "According to one embodiment, processing of a voice biometric can continue at 1008 with capture of at least a threshold amount of the biometric (e.g., 5 seconds of voice)."
Therefore, in Streit's teaching, the 10 ms voice samples/characteristics can be interpreted as time-dependent voice samples/characteristics, with 10 ms as the threshold for a segment or chunk; further, the length of the acquired voice samples, i.e., one second or five seconds, could also be read as time-dependent voice samples/characteristics per Applicant's specification and Streit's teachings, specifically in [0103] (the system is configured to capture voice instances based on a sliding 10 ms window that can be captured across one second of voice input), in [0187] (the selected text takes roughly 5 seconds to speak), and in [0208] (capture of at least a threshold amount of the biometric, e.g., 5 seconds of voice).
Since neither Applicant's specification nor the claims clearly specify and/or define what a time-dependent vocal characteristic is within the limitation "time-dependent vocal characteristic crossing a threshold value", the term "time-dependent" is interpreted under the broadest reasonable interpretation per the MPEP as cited above, and interpreting either the 10 ms voice sample segments/chunks, or the 1-second or 5-second length of the voice samples, as time-dependent voice samples/characteristics is consistent with the broadest reasonable interpretation of the limitation in light of the specification.
Therefore, Applicant's argument with respect to the limitation "the time-dependent vocal characteristic crossing a threshold value" as corresponding to Streit's teaching is not persuasive. All rejections therefore remain the same.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Streit (US 2021/0141896 A1).
Regarding Claims 1, 14, and 19, Streit teaches: A computer-implemented method comprising: receiving audio data ([0185] According to one embodiment, an authentication system assesses liveness by asking the user to read a few random words or a random sentence.); examining the audio data by time frame and to obtain a time-dependent vocal characteristic of the audio data; in response to the time-dependent vocal characteristic crossing a threshold value at a first time point (acquire voice samples using a sliding time (e.g., 10 ms) window across, for example, one second of input), creating a first chunk of the audio data from the audio data from a beginning time point to the first time point (voice input segmentation to acquire voice samples using a sliding time) ([0103] In another example, helper network processing can include voice input segmentation to acquire voice samples using a sliding time (e.g., 10 ms) window across, for example, one second of input. In some embodiments, processing of voice data includes pulse code modulation transformation that down samples each time segment to 2× the frequency range, which may be coupled with voice fast fourier transforms to convert the signal from the time domain to the frequency domain. [0135] According to one example, input voice data is transformed based on voice pulse code modulation (PCM). Processing of the audio data can include capturing samples of time segments from the audio information.); sending the first chunk of the audio data to a speech recognition machine learning model ([0215] For example, at 1020 the voice input is processed to determine if the input words matches the set of random words requested.
In one embodiment, a speech recognition function is executed to determine the words input, and matching is executed against the randomly requested words to determine an accuracy of the match.); and iteratively repeating the examining, the creating, and the sending for additional chunks of the audio data ([0045] According to various embodiments, the phases of operation are complimentary and can be used sequentially, alternatively, or simultaneously, among other options. For example, the first phase can be used to prime the second phase for operation, and can do so repeatedly. [0050] According to one embodiment, responsive to generating a prediction by the classification network, the system is configured to execute a validation of the results. According to some embodiments, where the elements of the array do not meet a threshold for valid identification, the system can be configured to execute subsequent validation. According to one example, input voice data is transformed based on voice pulse code modulation (PCM). Processing of the audio data can include capturing samples of time segments from the audio information. [0224] In various implementations, the system is configured to sample the resulting data and use this sample as input to a Fourier transform. In one example, the resulting frequencies are used as input to a pre-trained voice neural network capable of returning a set of embeddings (e.g., encrypted voice feature vectors). These embeddings, for example, sixty four floating point numbers, provide the system with private biometrics which then serve as input to a second neural network for classification.).
Regarding Claims 2, 15, and 20, Streit teaches: The computer-implemented method of claim 1, further comprising: in response to a pre-determined time threshold value being exceeded at an additional time point, creating a second chunk of the audio data from the first time point to the additional time point; and sending the second chunk of the audio data to the speech recognition machine learning model (See rejection of claim 1 and [0083] In one example, the system is configured to require a five second voice signal for processing, and the system can be configured to select the random biometric instances accordingly. Other thresholds can be used (e.g., one, two, three, four, six, seven, eight, nine seconds or fractions thereof, among other examples), each having respective random selections that are associated with a threshold period of input. [0103] In another example, helper network processing can include voice input segmentation to acquire voice samples using a sliding time (e.g., 10 ms) window across, for example, one second of input. [0121] In one example, the system is configured to capture voice instances based on a sliding 10 ms window that can be captured across one second of voice input, which enables the system to reach or exceed 50 samples. Note: It is inherent from the above teaching that each 10 ms window within the 1-second or 5-second voice signal yields a new 10 ms voice segment/chunk at an additional 10 ms time point, up to 50 segments/chunks of voice samples within 5 seconds of recorded voice.).
Regarding Claims 3 and 16, Streit teaches: The computer-implemented method of claim 1, wherein the first chunk of the audio data is sent to an encoder (neural network) of the speech recognition machine learning model (See rejection of claim 1 and [0054] In another example, the inventors have created a first neural network for processing plain or unencrypted voice input. The voice neural network is used to accept unencrypted voice input and to generate embeddings or feature vectors that are encrypted and Euclidean measurable for use in training another neural network. In various embodiments, the first voice neural network generates encrypted embeddings that are used to train a second neural network, that once trained can generate predictions on further voice input (e.g., match or unknown). In one example, the second neural network (e.g., a deep neural network—DNN) is trained to process unclassified voice inputs for authentication (e.g., predicting a match). ).
Regarding Claims 4 and 17, Streit teaches: The computer-implemented method of claim 1, wherein the speech recognition machine learning model is a recurrent neural network transducer (See rejection of claim 1 and [0202] Various neural networks can be used to accept plaintext behavioral information as input and output distance measurable encrypted feature vectors. According to one example, the first neural network (i.e., the generation neural network) can be architected as a Long Short-Term Memory (LSTM) model which is a type of Recurrent Neural Network (RNN).).
Regarding Claims 5 and 18, Streit teaches: The computer-implemented method of claim 1, wherein the time-dependent vocal characteristic is selected from a group consisting of intensity, tone, and change in intensity (voice biometrics) (See rejection of claim 1 and [065] In one example, the system includes a training threshold specifying how many training samples to generate from a given or received biometric. [0083] In one example, the system is configured to require a five second voice signal for processing, and the system can be configured to select the random biometric instances accordingly. Other thresholds can be used (e.g., one, two, three, four, six, seven, eight, nine seconds or fractions thereof, among other examples), each having respective random selections that are associated with a threshold period of input. [0103] In another example, helper network processing can include voice input segmentation to acquire voice samples using a sliding time (e.g., 10 ms) window across, for example, one second of input. [0121] In one example, the system is configured to capture voice instances based on a sliding 10 ms window that can be captured across one second of voice input, which enables the system to reach or exceed 50 samples.).
Streit further teaches in [0234]: "For audio based biometrics different background noises can be introduced, different words can be used, different samples from the same vocal biometric can be used in the training set, among other options."
Regarding Claim 6, Streit teaches: The computer-implemented method of claim 1, further comprising pre-determining the threshold value based on audio training data (See rejection of claim 1 and [065] In one example, the system includes a training threshold specifying how many training samples to generate from a given or received biometric. [0083] In one example, the system is configured to require a five second voice signal for processing, and the system can be configured to select the random biometric instances accordingly. Other thresholds can be used (e.g., one, two, three, four, six, seven, eight, nine seconds or fractions thereof, among other examples), each having respective random selections that are associated with a threshold period of input. [0103] In another example, helper network processing can include voice input segmentation to acquire voice samples using a sliding time (e.g., 10 ms) window across, for example, one second of input. [0121] In one example, the system is configured to capture voice instances based on a sliding 10 ms window that can be captured across one second of voice input, which enables the system to reach or exceed 50 samples.).
Regarding Claim 7, Streit teaches: The computer-implemented method of claim 6, wherein the pre-determination of the threshold value comprises identifying one or more local minima of time-dependent vocal characteristic values in the audio training data (See rejection of claim 6 and [065] In one example, the system includes a training threshold specifying how many training samples to generate from a given or received biometric. In one example, the system includes a training threshold specifying how many training samples to generate from a given or received biometric. [0083] In one example, the system is configured to require a five second voice signal for processing, and the system can be configured to select the random biometric instances accordingly. Other thresholds can be used (e.g., one, two, three, four, six, seven, eight, nine seconds or fractions thereof, among other examples), each having respective random selections that are associated with a threshold period of input. [0103] In another example, helper network processing can include voice input segmentation to acquire voice samples using a sliding time (e.g., 10 ms) window across, for example, one second of input. [0121] In one example, the system is configured to capture voice instances based on a sliding 10 ms window that can be captured across one second of voice input, which enables the system to reach or exceed 50 samples.).
Regarding Claim 8, Streit teaches: The computer-implemented method of claim 7, wherein the identified one or more local minima comprise candidate threshold values and the pre-determination of the threshold value comprises performing statistical analysis on the candidate threshold values (See rejection of claim 6 and [065] In one example, the system includes a training threshold specifying how many training samples to generate from a given or received biometric. [0083] In one example, the system is configured to require a five second voice signal for processing, and the system can be configured to select the random biometric instances accordingly. Other thresholds can be used (e.g., one, two, three, four, six, seven, eight, nine seconds or fractions thereof, among other examples), each having respective random selections that are associated with a threshold period of input. [0103] In another example, helper network processing can include voice input segmentation to acquire voice samples using a sliding time (e.g., 10 ms) window across, for example, one second of input. [0121] In one example, the system is configured to capture voice instances based on a sliding 10 ms window that can be captured across one second of voice input, which enables the system to reach or exceed 50 samples.).
Regarding Claim 9, Streit teaches: The computer-implemented method of claim 6, wherein the pre-determining comprises: performing frequency distribution analysis of recorded vocal characteristics from the audio training data and selecting a bin from the frequency distribution analysis (a voice fast fourier transformation) with a lowest number of values (down sample) as a basis for the threshold value (See rejection of claim 6 and [0103] In another example, helper network processing can include voice input segmentation to acquire voice samples using a sliding time (e.g., 10 ms) window across, for example, one second of input. In some embodiments, processing of voice data includes pulse code modulation transformation that down samples each time segment to 2× the frequency range, which may be coupled with voice fast fourier transforms to convert the signal from the time domain to the frequency domain.[0135] According to one example, input voice data is transformed based on voice pulse code modulation (PCM). Processing of the audio data can include capturing samples of time segments from the audio information. In one example, silence is removed from the audio information and PCM is executed against one second samples from the remaining audio data. In other embodiments, different sample sizes can be used to achieve a minimum number of authentication instances for enrollment and/or prediction. According to some embodiments, the PCM operation is configured to down sample the audio information to two times the frequency range. In other embodiments different down sampling frequencies can be used. Once PCM is complete at 1702, process 1700 continues at 1704 with a fourier transformation of the PCM signal from the time domain to the frequency domain. According to some embodiments, a voice fast fourier transformation operation is executed at 1704 to produce the frequency domain output.).
Regarding Claim 10, Streit teaches: The computer-implemented method of claim 1, wherein the time-dependent vocal characteristic is intensity and the threshold value is greater than 0 decibels (See rejection of claim 1 and [0135] According to one example, input voice data is transformed based on voice pulse code modulation (PCM). Processing of the audio data can include capturing samples of time segments from the audio information. In one example, silence is removed from the audio information and PCM is executed against one second samples from the remaining audio data. [0187] The system generates random text that is selected to take roughly 5 seconds to speak (in whatever language the user prefers—and with other example threshold minimum periods). The user reads the text and the system (e.g., implemented as a private biometrics cloud service or component) then captures the audio and performs a speech to text process, comparing the pronounced text to the requested text. The system allows, for example, a private biometric component to assert the liveness of the requestor for authentication.).
Regarding Claim 11, Streit teaches: The computer-implemented method of claim 1, wherein the first time point corresponds to an internal portion of a spoken language cluster whose audio is captured within the first chunk (See rejection of claim 1 and [065] In one example, the system includes a training threshold specifying how many training samples to generate from a given or received biometric. [0083] In one example, the system is configured to require a five second voice signal for processing, and the system can be configured to select the random biometric instances accordingly. Other thresholds can be used (e.g., one, two, three, four, six, seven, eight, nine seconds or fractions thereof, among other examples), each having respective random selections that are associated with a threshold period of input. [0103] In another example, helper network processing can include voice input segmentation to acquire voice samples using a sliding time (e.g., 10 ms) window across, for example, one second of input. [0121] In one example, the system is configured to capture voice instances based on a sliding 10 ms window that can be captured across one second of voice input, which enables the system to reach or exceed 50 samples. [0135] According to one example, input voice data is transformed based on voice pulse code modulation (PCM). Processing of the audio data can include capturing samples of time segments from the audio information. In one example, silence is removed from the audio information and PCM is executed against one second samples from the remaining audio data. [0187] The system generates random text that is selected to take roughly 5 seconds to speak (in whatever language the user prefers—and with other example threshold minimum periods). The user reads the text and the system (e.g., implemented as a private biometrics cloud service or component) then captures the audio and performs a speech to text process, comparing the pronounced text to the requested text. 
The system allows, for example, a private biometric component to assert the liveness of the requestor for authentication.).
Regarding Claim 12, Streit teaches: The computer-implemented method of claim 10, wherein the first time point corresponds to a word boundary of the spoken language cluster (See rejection of claim 1 and [065] In one example, the system includes a training threshold specifying how many training samples to generate from a given or received biometric. [0083] In one example, the system is configured to require a five second voice signal for processing, and the system can be configured to select the random biometric instances accordingly. Other thresholds can be used (e.g., one, two, three, four, six, seven, eight, nine seconds or fractions thereof, among other examples), each having respective random selections that are associated with a threshold period of input. [0103] In another example, helper network processing can include voice input segmentation to acquire voice samples using a sliding time (e.g., 10 ms) window across, for example, one second of input. [0121] In one example, the system is configured to capture voice instances based on a sliding 10 ms window that can be captured across one second of voice input, which enables the system to reach or exceed 50 samples. [0187] The system generates random text that is selected to take roughly 5 seconds to speak (in whatever language the user prefers—and with other example threshold minimum periods). The user reads the text and the system (e.g., implemented as a private biometrics cloud service or component) then captures the audio and performs a speech to text process, comparing the pronounced text to the requested text. The system allows, for example, a private biometric component to assert the liveness of the requestor for authentication.).
Regarding Claim 13, Streit teaches: The computer-implemented method of claim 1, further comprising receiving text data from the speech recognition machine learning model in response to sending the first chunk of the audio data to the speech recognition machine learning model, the text data comprising a prediction of the speech recognition machine learning model of a word whose audio is captured within the first chunk (See rejection of Claim 1 and [0185] The first algorithm (e.g., liveness) performs a speech to text function to compare the pronounced text to the requested text (e.g., random words) to verify that the words were read correctly, and the second algorithm uses a prediction function (e.g., a prediction application programming interface (API)) to perform a one-to-many (1:N) identification on a private voice biometric to ensure that the input correctly identifies the expected person. [0187] The system generates random text that is selected to take roughly 5 seconds to speak (in whatever language the user prefers—and with other example threshold minimum periods). The user reads the text and the system (e.g., implemented as a private biometrics cloud service or component) then captures the audio and performs a speech to text process, comparing the pronounced text to the requested text. The system allows, for example, a private biometric component to assert the liveness of the requestor for authentication. In conjunction with liveness, the system compares the random text voice input and performs an identity assertion on the same input to ensure the voice that spoke the random words matches the user's identity.).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure. The prior art of record LI et al. (CN 108172229 A) teaches a speech recognition-based authentication and reliable control method.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD K ISLAM, whose telephone number is (571) 270-5878. The examiner can normally be reached Monday through Friday, EST (IFP).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras Shah, can be reached at 571-270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMAD K ISLAM/Primary Examiner, Art Unit 2653