Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/12/2025 has been entered.
Status of the Claims
Claims 1-5, 7-15, and 17-20 are pending.
Response to Applicant’s Arguments
In response to “Thus, when taking the Examiner's characterization that Mengibar's word sequences Wp, Wq, Wr for the base language model 110 correspond to the claimed domain-independent baseline model component and Mengibar's certain word sequences Wx, Wy for the customized language model 114 correspond to the claimed domain-specific model component, one having ordinary skill in the art immediately recognized that P(Wx, Wy) is not a "feature" but a pre-existing probability value for an n-gram (e.g., word sequence) already contained within the base language model 110. In other words, the base language model 110 disclosed by Mengibar already includes a probability value P(Wx, Wy) for that word sequence, and in the examples provided at Paragraph [0023] of Mengibar, P(Wx, Wy) is low because the word sequence (e.g., a specialty restaurant name) is popular only in a specific geographic area and not among the general public. Accordingly, as Mengibar merely changes a probability value P(Wx, Wy) that is part of the original, comprehensive set of n-grams in the base language model 110, any set of n-grams (e.g., word sequences) in the base language model 110 is identical to the set of n-grams in the customized language model 114” and “Accordingly, modifying Biadsy to generate his language model 150 as a domain-specific language model from Mengibar's base language model 110 to preserve probability values (i.e., structure) from the base language model of the words likely being spoken would result in Biadsy's language model 150 at best including a domain-independent baseline model component and a domain-specific model component that share a same identical set of n-grams”.
In Biadsy, a language model 150 determines a posterior probability of a current word given information about the linguistic context (e.g., prior words “let’s meet at”) and the non-linguistic context (e.g., location, device state, application, user characteristics); the language model 150 was trained using linguistic features 210 and non-linguistic features 220 (Biadsy, ¶41).
Here, the trained language model 150 includes a set of internal weights representing the training state of the language model, indicating how various aspects of context make words more or less likely to occur (Biadsy, ¶42). In one example, when feature scores 145 indicate the location of user 102, the weights within the language model 150 increase the likelihood for words frequently spoken at the user’s location and decrease the likelihood for words infrequently spoken or not spoken at the user’s location (Biadsy, ¶45).
Like the internal weights of the trained model 150 in Biadsy, Mengibar teaches a customized language model 114 generated from a base language model 110 by adjusting the base language model 110 based on adjustment factors; the adjustment factors are information items separate from the base language model 110 that can affect the likelihood that speech input is converted to particular text based on, for example, location (Mengibar, ¶25).
Just as the language model 150’s internal weights increase the likelihood for words frequently spoken at the user’s location in Biadsy, the adjustment factors of the customized language model 114 increase the probability value P(Wx, Wy) such that the probability value P(Wx, Wy) in the customized language model 114 is higher than the probability value P(Wx, Wy) in the base language model 110 when an adjustment factor corresponding to a particular location condition is satisfied (Mengibar, ¶26).
In other words, the internal weights in Biadsy and the adjustment factors in Mengibar correspond to the claimed “feature,” such that Mengibar demonstrates:
given the structure of the base language model having features P(Wp, Wq, Wr) > P(Wx, Wy),
the structure of the customized language model has features P(Wp, Wq, Wr) < µ * P(Wx, Wy), µ being the adjustment factor (Mengibar, ¶29).
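For illustration only, with hypothetical values that do not appear in Mengibar (e.g., P(Wp, Wq, Wr) = 0.40, P(Wx, Wy) = 0.05, and µ = 10):

base language model: P(Wp, Wq, Wr) = 0.40 > P(Wx, Wy) = 0.05
customized language model: µ * P(Wx, Wy) = 10 * 0.05 = 0.50 > P(Wp, Wq, Wr) = 0.40

That is, applying the adjustment factor reverses the relative ranking of the two word sequences without changing the base probability values themselves.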
Here, the customized language model preserves the base language model as a separate model structure within it for at least P(Wp, Wq, Wr), and the adjustment factor multiplier is separately applied to the base language model structure P(Wx, Wy) to generate the customized language model structure µ * P(Wx, Wy).
Said another way, when the customized language model is applied to speech input of a user in geographic region ABC to generate a corresponding text string, the resulting list places (Wx, Wy) higher than (Wp, Wq, Wr) because the base language model component P(Wx, Wy) (i.e., the score of the candidate transcription of the utterance using the domain-independent baseline model component) in the customized language model has been adjusted by the geographic-region-ABC-specific adjustment factor to µ * P(Wx, Wy) (i.e., adjusting the score of the candidate transcription using the domain-specific model component).
Therefore, Mengibar demonstrates that a trained / customized language model comprises at least a domain-independent (i.e., location-independent) baseline model component characterized by P(Wx, Wy) and P(Wp, Wq, Wr), and a domain-dependent (i.e., location-specific) model component characterized by µ * P(Wx, Wy), with the location-specific adjustment factor µ multiplying P(Wx, Wy) to increase the probability value P(Wx, Wy) above P(Wp, Wq, Wr).
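To make this score-then-adjust flow concrete, the following is a minimal conceptual sketch in Python. All identifiers, probability values, and the multiplier are hypothetical and chosen only to illustrate the mechanism described above; they are not code or values from Biadsy or Mengibar.

    # Hypothetical illustration only -- not from Biadsy or Mengibar.
    # Domain-independent baseline component: n-gram -> base probability value.
    baseline = {
        ("Wp", "Wq", "Wr"): 0.40,  # P(Wp, Wq, Wr)
        ("Wx", "Wy"): 0.05,        # P(Wx, Wy)
    }

    # Domain-specific components: non-linguistic context -> adjustment factors.
    domain_components = {
        "geographic_region_ABC": {("Wx", "Wy"): 10.0},  # adjustment factor µ
    }

    def adjusted_score(candidate, context):
        """Baseline score, multiplied by the adjustment factor (if any) of the
        domain-specific component selected by the non-linguistic context."""
        base = baseline.get(candidate, 0.0)
        factor = domain_components.get(context, {}).get(candidate, 1.0)
        return factor * base

    def transcribe(candidates, context):
        """Select the candidate transcription with the highest adjusted score."""
        return max(candidates, key=lambda c: adjusted_score(c, context))

    candidates = [("Wp", "Wq", "Wr"), ("Wx", "Wy")]
    # 10.0 * 0.05 = 0.50 > 0.40, so ("Wx", "Wy") ranks first in region ABC.
    print(transcribe(candidates, "geographic_region_ABC"))

In this sketch, the baseline lookup corresponds to determining the score of the candidate transcription using the domain-independent baseline model component, and the multiplication corresponds to adjusting that score using the domain-specific model component selected based on the non-linguistic context.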
In response to “Therefore, Applicant respectfully submits that the alleged combination fails to teach or suggest a domain-independent baseline model component comprising a first set of n-gram features from a general corpus that is not labeled with non-linguistic context, and a domain-specific model component comprising a second, different and smaller set of n-gram features selected from a domain-specific corpus, as recited in independent claims 1 and 11”.
In view of this amendment to claims 1 and 11, the rejection under Biadsy and Mengibar has been reconsidered. Upon further search and consideration, the details of a new combination are set forth below.
Non-Statutory Double Patenting
The non-statutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. See In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent is shown to be commonly owned with this application. See MPEP § 804.02. A registered attorney or agent of record may sign a terminal disclaimer.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1, 3, 5, 7-11, 13, 15, and 17-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over the claims of U.S. Patent No. 11875789 B2 in view of Moore et al. (US 2013/0018650 A1).
Instant Application No. 18/391781, claim 1:
1. A computer-implemented method executed on data processing hardware that causes the data processing hardware to perform operations comprising:
after training a language model comprising a domain-independent baseline model component and a domain specific model component:
receiving an utterance comprising a non-linguistic context; and
obtaining a candidate transcription of the utterance; and
using the trained language model comprising the domain-independent baseline model component and the domain-specific model component selected based on the non-linguistic context corresponding to a particular domain: determining a score of the candidate transcription of the utterance using the domain-independent baseline component of the trained language model;
adjusting the score of the candidate transcription using the domain-specific model component of the trained language model; and determining a transcription for the utterance based on the adjusted score.
U.S. Patent No. 11875789 B2, claim 1:
1. A computer-implemented method executed on data processing hardware that causes the data processing hardware to perform operations comprising:
obtaining a plurality of training language examples for training a language model to recognize speech in a particular domain represented by a combination of multiple different aspects of non-linguistic context, wherein: each training language example occurs in one or more of the multiple different aspects of non-linguistic context representing the particular domain; and the language model comprises: a baseline model component; and multiple domain-specific model components each corresponding to a respective different aspect of non-linguistic context from the multiple different aspects of non-linguistic context representing the particular domain;
training, using the plurality of training language examples, the language model by updating corresponding weights of the multiple domain-specific model components;
obtaining an utterance comprising a non-linguistic context; and
determining a transcription of the utterance using the language model by: determining a score of a candidate transcription of the utterance using the baseline model component;
adjusting the score of the candidate transcription using at least one domain-specific model component of the multiple domain-specific model components of the language model, wherein the at least one domain-specific model component is selected based on the non-linguistic context; and
determining the transcription for the utterance based on the adjusted score.
3. The method of claim 1, wherein the baseline model component is domain independent.
Instant Application No. 18/391781, claim 11:
11. A system comprising:
data processing hardware; and
memory hardware in communication with the data processing hardware and storing instructions that when executed by the data processing hardware cause the data processing hardware to perform operations comprising:
after training a language model comprising a domain-independent baseline model component and a domain specific model component:
receiving an utterance comprising a non-linguistic context; and
obtaining a candidate transcription of the utterance; and using the trained language model comprising the domain-independent baseline model component and the domain-specific model component selected based on the non-linguistic context corresponding to a particular domain: determining a score of the candidate transcription of the utterance using the domain-independent baseline model component of the trained language model;
adjusting the score of the candidate transcription using the domain-specific model component of the trained language model; and determining a transcription for the utterance based on the adjusted score.
U.S. Patent No. 11875789 B2, claim 9:
9. A system comprising:
data processing hardware; and
memory hardware in communication with the data processing hardware and storing instructions that when executed by the data processing hardware cause the data processing hardware to perform operations comprising:
obtaining a plurality of training language examples for training a language model to recognize speech in a particular domain represented by a combination of multiple different aspects of non-linguistic context, wherein: each training language example occurs in one or more of the multiple different aspects of non-linguistic context representing the particular domain; and the language model comprises: a baseline model component; and multiple domain-specific model components each corresponding to a respective different aspect of non-linguistic context from the multiple different aspects of non-linguistic context representing the particular domain;
training, using the plurality of training language examples, the language model by updating corresponding weights of the multiple domain-specific model components;
obtaining an utterance comprising a non-linguistic context; and
determining a transcription of the utterance using the language model by: determining a score of a candidate transcription of the utterance using the baseline model component;
adjusting the score of the candidate transcription using at least one domain-specific model component of the multiple domain-specific model components of the language model, wherein the at least one domain-specific model component is selected based on the non-linguistic context; and
determining the transcription for the utterance based on the adjusted score.
11. The system of claim 9, wherein the baseline model component is domain independent.
The claims of US 11875789 B2 do not recite wherein the domain-independent baseline model component comprises a first set of n-gram features from a general corpus that is not labeled with non-linguistic context, and the domain-specific model component comprises a second, different and smaller set of n-gram features selected from a domain-specific corpus.
Moore teaches training n-gram language models (¶29) where a domain-independent baseline language model comprises a first set of n-gram features from a general corpus that is not labeled with non-linguistic context (¶2, general-purpose language models are not trained on domain-specific data (i.e., they are trained on non-domain-specific data and therefore not labeled with any domain context)) and a domain-specific model comprises a second, different and smaller set of n-gram features selected from a domain-specific corpus (¶3 and ¶14, using a smaller in-domain training dataset to train an in-domain language model; i.e., an n-gram language model trained with a smaller in-domain training dataset would have a smaller set of n-gram features, as fewer resources are needed to define the smaller in-domain training dataset than are used for the large amount of non-domain-specific training data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the domain-independent baseline model component to comprise a first set of n-gram features from a general corpus that is not labeled with non-linguistic context, and the domain-specific model component to comprise a second, different and smaller set of n-gram features selected from a domain-specific corpus, in order to make the language model more accurate with training data well matched to the desired application (Moore, ¶12).
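As a minimal, purely illustrative sketch (the corpora, token lists, and the bigram_features helper below are hypothetical and are not taken from Moore), the following shows why a small in-domain corpus yields a second, different and smaller set of n-gram features than a large general corpus:

    # Hypothetical illustration only -- not Moore's implementation.
    def bigram_features(tokens):
        """Collect the set of bigram (n-gram, n=2) features seen in a corpus."""
        return {tuple(tokens[i:i + 2]) for i in range(len(tokens) - 1)}

    # Larger general corpus, not labeled with any non-linguistic context.
    general_corpus = "the cat sat on the mat and the dog ran to the park".split()
    # Much smaller domain-specific corpus (e.g., a music-player domain).
    domain_corpus = "play rock music play jazz music".split()

    first_set = bigram_features(general_corpus)   # larger feature set
    second_set = bigram_features(domain_corpus)   # different, smaller set

    print(len(first_set), len(second_set))        # 12 5
    print(first_set.isdisjoint(second_set))       # True: different sets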
The limitations of claims 3 and 13 correspond to claims 1 and 9 of US 11875789 B2.
The limitations of claims 5 and 15 correspond to claims 5 and 13 of US 11875789 B2.
The limitations of claims 7 and 17 correspond to claims 4 and 12 of US 11875789 B2.
The limitations of claims 8 and 18 correspond to claims 6 and 14 of US 11875789 B2.
The limitations of claims 9 and 19 correspond to claims 7 and 15 of US 11875789 B2.
The limitations of claims 10 and 20 correspond to claims 8 and 16 of US 11875789 B2.
Claims 2, 4, 12, and 14 are rejected on the ground of nonstatutory double patenting as being unpatentable over the claims of U.S. Patent No. 11875789 B2 in view of Moore et al. (US 2013/0018650 A1) as applied to claims 1 and 11 above, and further in view of Casado et al. (US 8862467 B1).
Regarding Claims 2, 4, 12, and 14, claims 1 and 9 of US 11875789 B2 disclose an aspect of non-linguistic context representing the particular domain.
The claims of US 11875789 B2 do not recite that the non-linguistic context representing the particular domain comprises an application executing on a user device that captured the utterance.
Casado teaches a computer system for transcribing spoken input from a user of a computing device (col. 1, lines 48-50) based on context information (col. 1, lines 51-60), wherein the context information comprises an application executing on the computing device that captured the spoken input (col. 2, lines 32-33).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to determine a transcription for an utterance based on non-linguistic context representing a particular domain comprising an application executing on a user device that captured the utterance, in order to improve the accuracy of speech recognition so that the utterance can be transcribed to most likely match the speech that the user likely intended (Casado, col. 1, lines 42-47).
Claim Rejections - 35 USC § 103
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 103 that form the basis for the rejections under this section made in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Biadsy et al. (US 2015/0228279 A1) in view of Mengibar et al. (US 2013/0346077 A1) and Moore et al. (US 2013/0018650 A1).
Regarding Claims 1 and 11, Biadsy discloses a system (¶20, a system that receives audio indicative of utterance and context data indicating non-linguistic context of the utterance to determine a transcription for the utterance) comprising:
data processing hardware (¶100, processors for execution of computer programs stored in semiconductor memory device); and
memory hardware in communication with the data processing hardware and storing instructions that when executed by the data processing hardware cause the data processing hardware (¶97, computer program instructions encoded on a computer readable medium for execution) to perform operations comprising:
after training a language model comprising a domain-specific model component (¶41, provide feature scores 145 as input to a language model 150 that has been trained to estimate the likelihood of a word or phrase occurring based on scores for linguistic and/or non-linguistic features; the features used to train the language model 150 can be the same linguistic features and non-linguistic features corresponding to the feature scores 145):
receiving an utterance (¶21 and ¶23, client device 110 collects information such as audio data 112 for utterance 104 and sends information to computing system 120) comprising a non-linguistic context (¶25, client device 110 determines and sends non-linguistic context data 116 comprising an identifier for an application running on client device 110);
obtaining a candidate transcription of the utterance (¶28, speech recognizer module 130 identifies candidate transcriptions 135); and
using the trained language model comprising the domain-specific model component selected based on the non-linguistic context corresponding to a particular domain (¶35, feature scores 145 include a set of scores for non-linguistic features 220 such as application features 222, location features 224, and user features 226; ¶41, provide the feature scores 145 as input to a language model 150 to provide a set of output values 155 indicating the likelihoods that one or more words will occur in the current context, the language model 150 trained to estimate the likelihood of a word or phrase occurring based on scores for the non-linguistic features):
determining a score of a candidate transcription of the utterance (¶¶46-47, the language model 150 outputs score values 155 comprising a score for each of multiple words);
adjusting the score of the candidate transcription using the domain-specific model component of the trained language model (¶49, a rescoring module 160 uses the output values 155 from language model 150 to determine a score 165 indicating a likelihood of occurrence of each candidate transcription 135 as a whole by combining scores 155 from the language model 150 for individual words); and
determining a transcription for the utterance based on the adjusted score (¶50, computing system 120 selects a transcription for utterance 104 based on scores 165).
Biadsy does not teach that the language model comprises a domain-independent baseline model component, or determining the score of the candidate transcription of the utterance using the domain-independent baseline model component of the language model.
Mengibar teaches after training a language model (¶32, a system implementing a dynamic language model by obtaining a base language model and building the base language model from search logs 204 using publicly available language modeling technologies; ¶61, the system customizes the base language model and stores the customized language model before query time) comprising a domain-independent baseline model component (¶22 and ¶24, base language model component comprising P(Wp, Wq, Wr)…P(Wx, Wy)) and a domain-specific model component (¶26, apply an adjustment factor / multiplier (say, µ) to P(Wx, Wy) in order to increase P(Wx, Wy) to µ * P(Wx, Wy), which is higher than P(Wx, Wy) in the base language model; i.e., µ * P(Wx, Wy) is the domain-specific model component):
receiving an utterance comprising a non-linguistic context (¶61, when the system receives a speech input from user A, the system can identify the language model adjustment rules 404 from user A based on an identifier of user A; e.g., ¶33, the system uses geographic language model rules to customize the base language model);
obtaining a candidate transcription of the utterance (¶29, in one example determine that the user providing the speech input is located in geographic region ABC, apply customized language model in speech recognition to generate a text string from the speech input); and
using the trained language model comprising the domain-independent baseline model component and the domain-specific model component selected based on the non-linguistic context corresponding to a particular domain (¶¶64-65, at query time, the system receives a speech input and a context of the speech input to select a customized language model):
determining a score of the candidate transcription of the utterance using the domain-independent baseline model component of the trained language model (¶26 and ¶29, for a given speech input, generate at least the probability values P(Wp, Wq, Wr) and P(Wx, Wy), which include the unchanged component P(Wp, Wq, Wr) from the base language model as well as P(Wx, Wy));
adjusting the score of the candidate transcription using the domain-specific model component of the trained language model (¶26, ¶29, and ¶36, applying the adjustment factor (per ¶25, adjustment factors are information items separate from the base language model 110, comparable to weights in Biadsy) in the customized language model to generate probability value µ * P(Wx, Wy) that is higher than baseline component P(Wx, Wy) and higher than baseline component P(Wp, Wq, Wr)); and
determining a transcription for the utterance based on the adjusted score (¶29 and ¶36, convert speech input into a text string and display a list in which (Wx, Wy) is placed higher than (Wp, Wq, Wr)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to generate the language model 150 in Biadsy from a baseline language model so as to comprise a domain-independent baseline model component that preserves probability values (i.e., structure) from the baseline language model of the words likely being spoken and a domain-specific model component comprising probability values (i.e., structure) adjusted according to non-linguistic context (e.g., location), in order to provide more pertinent text search queries based on a received voice input comprising query context such as location (Mengibar, ¶9 and ¶27).
Biadsy as modified by Mengibar does not disclose wherein the domain-independent baseline model component comprises a first set of n-gram features from a general corpus that is not labeled with non-linguistic context, and the domain-specific model component comprises a second, different and smaller set of n-gram features selected from a domain-specific corpus.
Moore teaches training n-gram language models (¶29) where a domain-independent baseline language model comprises a first set of n-gram features from a general corpus that is not labeled with non-linguistic context (¶2, general-purpose language models are not trained on domain-specific data (i.e., they are trained on non-domain-specific data and therefore not labeled with any domain context)) and a domain-specific model comprises a second, different and smaller set of n-gram features selected from a domain-specific corpus (¶3 and ¶14, using a smaller in-domain training dataset to train an in-domain language model; i.e., an n-gram language model trained with a smaller in-domain training dataset would have a smaller set of n-gram features, as fewer resources are needed to define the smaller in-domain training dataset than are used for the large amount of non-domain-specific training data).
In the language model of Biadsy as modified by Mengibar, which comprises a domain-independent baseline model component (a first set of n-grams (Wp, Wq, Wr) and (Wx, Wy) with corresponding n-gram features, i.e., probability values P(Wp, Wq, Wr) and P(Wx, Wy), where P(Wp, Wq, Wr) > P(Wx, Wy) per Mengibar, ¶24) and a domain-specific model component comprising a second, different set of n-gram features (a second set of n-grams (Wx, Wy) having different n-gram features, i.e., the adjustment factor / internal weight µ with corresponding probability value µ * P(Wx, Wy), where µ * P(Wx, Wy) > P(Wp, Wq, Wr) > P(Wx, Wy) per Mengibar, ¶23 and ¶26, (Wx, Wy) being a specialty name popular among users in the domain of geographic area ABC), it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the domain-independent baseline model component to comprise a first set of n-gram features from a general corpus that is not labeled with non-linguistic context, and the domain-specific model component to comprise a second, different and smaller set of n-gram features selected from a domain-specific corpus, in order to make the language model more accurate with training data well matched to the desired application (Moore, ¶12).
Regarding Claims 2 and 12, Biadsy discloses wherein the non-linguistic context representing the particular domain comprises an application executing on a user device (¶26, client device 110 provides data indicating the active application as non-linguistic context data 116; compare Mengibar, ¶46, the geographic location of a user device from which a speech input is received is non-linguistic context data).
Regarding Claims 3 and 13, Biadsy discloses wherein the operations further comprise selecting the domain-specific model component of the trained language model used to adjust the score of the candidate transcription from a plurality of domain-specific model components of the trained language model based on the non-linguistic context (¶45, various weights or other parameters (i.e., domain-specific model components) within the language model 150 can be set to indicate the impact that various feature scores have on the likelihood of a word occurring; e.g., navigation-application-specific weights / parameters (a navigation-domain-specific model component) include weights indicating users have frequently entered names of locations like “gas station”, “theater”, and “school”; in another example, location-specific weights / parameters (a location-domain-specific model component); per ¶46, language model 150 provides output values 155 given the context indicated by feature scores 145; i.e., selecting language model weights / parameters (a domain-specific model component) based on feature scores 145 to calculate output values 155; see also Mengibar, ¶36, the adjustment factor includes a multiplier that increases the probability value P(Wx, Wy) of the word sequence (Wx, Wy) in the base language model to create the customized language model (i.e., the domain-specific model component corresponding to the adjustment factor)).
Regarding Claims 4 and 14, Biadsy discloses wherein the non-linguistic context comprises an application executing on a user device that captured the utterance (¶45, when the navigation application is used, the likelihood that the language model 150 indicates for words like “theater” may be higher than the likelihood indicated if the user is not using the navigation application).
Regarding Claims 5 and 15, Biadsy discloses wherein the non-linguistic context comprises a location, a time condition (¶25, non-linguistic context indicates factors related to user’s physical environment such as time), a user characteristic, a device characteristic, or a device status (¶41, non-linguistic context includes location, device state, application, user characteristics).
Regarding Claims 7 and 17, Biadsy as modified by Mengibar discloses wherein the baseline model component comprises corresponding weights for a respective set of features (Biadsy, ¶¶33-34, linguistic features comprise n-gram features; Biadsy, ¶¶41-42, linguistic features are used to train the language model 150 to include a set of internal weights so that the language model is able to estimate likelihoods of words occurring given many different types of linguistic contexts; per Mengibar, ¶22, the base language model includes a probability value P(Wp, Wq, Wr) that is associated with the word sequence (Wp, Wq, Wr)).
Regarding Claims 8 and 18, Biadsy discloses wherein the baseline model component comprises a log-linear model comprising corresponding weights for a corresponding set of features (¶43, a log-linear model may be used to combine word n-gram feature scores with feature scores indicating physical environment; per ¶¶41-42, linguistic features 210 are used to train the language model 150 to include the set of internal weights indicating how various aspects of context make words more or less likely to occur; i.e., the internal weights indicate how word n-gram feature scores make words more or less likely to occur).
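For reference, a textbook log-linear formulation (set out here for illustration only; Biadsy does not present the equation in this form) scores a word w given the combined linguistic and non-linguistic context as:

P(w | context) = exp( Σi λi * fi(w, context) ) / Z(context)

where each fi is a feature (e.g., a word n-gram feature or a non-linguistic feature such as location), each λi is the corresponding internal weight learned during training, and Z(context) is the normalizing sum of exp( Σi λi * fi(w', context) ) over all candidate words w'.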
Regarding Claims 9 and 19, Biadsy as modified by Mengibar discloses wherein the corresponding weights of the baseline model component are for features that represent occurrence of n-grams independent of non-linguistic context (Biadsy, ¶32, feature scores 145 include a score for each of a set of linguistic features 210, e.g., a bigram feature or a trigram feature; per Biadsy, ¶¶41-42, linguistic features 210 are used to train the language model 150 / set of internal weights; this is equivalent to Mengibar, ¶22, the base language model includes a probability value P(Wp, Wq, Wr) that is associated with the word sequence (Wp, Wq, Wr), which is built from publicly available language modeling technologies / toolkits per Mengibar, ¶32).
Regarding Claims 10 and 20, Biadsy discloses wherein the domain-specific model component is a log-linear model that comprises corresponding weights for a corresponding set of features (¶43, language model 150 is a log-linear model that combines word n-gram feature scores with feature scores indicating physical environment, user characteristics, and other factors).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to examiner Richard Z. Zhu, whose telephone number is 571-270-1587, or to the examiner's supervisor, Hai Phan, whose telephone number is 571-272-6338. Examiner Richard Zhu can normally be reached M-Th, 7:30-17:00.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RICHARD Z ZHU/Primary Examiner, Art Unit 2654 01/27/2026