DETAILED ACTION
This action is responsive to the Request for Continued Examination filed on 01/02/2026. Claims 1, 9, and 16 have been amended. Claims 1-20 are pending in the case. Claims 1, 9, and 16 are independent claims.
Claim Objections
Claims 1-20 are objected to because of the following informalities:
Claim 1:
Line 5 recites “computer devices the status comprising connection state” where “computer devices, the status comprising a connection state” was apparently intended.
Line 17 recites “the at least one target applications” where “the at least one target application” was apparently intended.
Claim 9:
Line 8 recites “connected computer devices the status comprising connection state” where “connected computer devices, the status comprising a connection state” was apparently intended.
Line 23 recites “the at least one target applications” where “the at least one target application” was apparently intended.
Claim 16:
Line 10 recites “connected computer devices the status comprising connection state” where “connected computer devices, the status comprising a connection state” was apparently intended.
Line 25 recites “the at least one target applications” where “the at least one target application” was apparently intended.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention. There does not appear to be sufficient support in the original specification for the newly added limitation of independent claims 1, 9, and 16 reciting “the display updating as additional tokens of the user input are received.”
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant regards as the invention. The phrase “in near real-time” in claims 1, 9, and 16 is relative, which renders the claims indefinite. The term “near real-time” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. In other words, the notion of something occurring “near” real-time is subjective, as different people could have widely varying ideas of what is sufficiently “near” to “real-time.”
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 9-13, and 16-20 are rejected under 35 U.S.C. § 103 as being unpatentable over Carbune et al. (US Patent Application Pub. No. 2022/0172715 A1, hereinafter “Carbune”) in view of Chen (US Patent Application Pub. No. 2020/0201535, hereinafter “Chen”).
As to independent claims 1, 9, and 16, Carbune shows a computer-implemented method, a computer product, and a concomitant computer system [¶ 12], comprising:
receiving, by one or more computer processors, a user input to a computer user interface [“{…} a user can initialize the automated assistant 304 by providing a verbal, textual, and/or a graphical input to an assistant interface 320 to cause the automated assistant 304 to initialize one or more actions (e.g., provide data, control a peripheral device, access an agent, generate an input and/or an output, etc.). {…}” (¶ 30)
“{…} an operation 406, which can include determining whether an assistant input has been detected. An input to the automated assistant can be, for example, a spoken utterance, a GUI input, and/or any other type of input that can be provided to a computing device. {…}” (¶ 48)];
retrieving, by the one or more computer processors, a status of one or more connected computer devices the status comprising connection state and applications currently executing on the connected devices [“{…} An application state engine of the automated assistant 304 and/or the computing device 302 can access application data 330 to determine one or more actions capable of being performed by one or more applications 334, as well as a state of each application of the one or more applications 334 and/or a state of a respective device that is associated with the computing device 302. A device state engine of the automated assistant 304 and/or the computing device 302 can access device data 332 to determine one or more actions capable of being performed by the computing device 302 and/or one or more devices that are associated with the computing device 302. Furthermore, the application data 330 and/or any other data (e.g., device data 332) can be accessed by the automated assistant 304 to generate contextual data 336, which can characterize a context in which a particular application 334 and/or device is executing, and/or a context in which a particular user is accessing the computing device 302, accessing an application 334, and/or any other device or module.” (¶ 39)];
performing, by the one or more computer processors, a contextual analysis of the user input using one or more natural language processing (NLP) feature representations comprising at least one of word embeddings, sentence embeddings, dependency graphs, or co-reference graphs [e.g. generating “contextual data” about the user input (¶¶ 08, 30, 39-44) using one or more natural language processing (NLP) feature representations comprising at least one of word embeddings, sentence embeddings, dependency graphs, or co-reference graphs (¶¶ 09 & 26-29)];
retrieving, by the one or more computer processors, from a database [e.g. file storage subsystem 526 (¶ 60)], each of one or more previous target applications associated with at least one of previous user inputs matching the user input, wherein the status of the one or more connected computer devices and surrounding device context are used to filter retrievable target applications [the “state of a respective device that is associated with the computing device 302” (¶ 39) and surrounding device context (¶ 39) are used at least in part to filter/choose/narrow down retrievable/available/API-accessible target applications (¶¶ 10, 46, & 49-54). In other words, paragraph 39 shows that the state/status of the one or more connected computer devices and their capability of performing the functionality of the target application are taken into consideration to “filter” (or choose, among all other possible applications) which application is contextually appropriate for the task at hand. The other cited paragraphs (¶¶ 10, 46, & 49-54) show even further examples of how “filtering”/choosing appropriate applications may be reduced to practice.]; predicting, by the one or more computer processors, at least one target application from the retrievable target applications for the user input, based at least in part, on the contextual analysis and analysis of historical user data on target applications associated with a user input [“{…} in response to the spoken utterance, the automated assistant can process audio data characterizing the spoken utterance in order to identify a conversation and/or application that is most related to the spoken utterance. {…}” (¶ 07)
“As an example, in response to the spoken utterance, “Assistant, tell Luke that ‘I like the choreography in this video,’” the automated assistant can determine whether an existing conversation and/or application is associated with a person named “Luke.” When the automated assistant identifies one or more conversations and/or applications associated with the person “Luke,” the automated assistant can further identify a particular conversation and/or application. Alternatively, or additionally, the automated assistant can process audio data corresponding to the spoken utterance in order to identify a topic and/or summary of the spoken utterance, and determine a relevance of the spoken utterance to one or more conversations and/or applications. For example, historical interactions between the user and one or more applications can be characterized by application data that can be processed by the automated assistant in order to determine whether there are associations between the spoken utterance and prior interactions. When one or more terms from the spoken utterance are synonymous with an identified topic associated with a particular application, the automated assistant can select the particular application as the targeted application that the user intends to be affected by the spoken utterance.” (¶ 09)
“{…} The operation 404 can include generating interaction data based on the detected interaction. For example, the interaction data can degenerated with prior permission from a user to indicate one or more participants in the interaction, the application in which the interaction was carried out, temporal data associated with the interaction, semantic understanding information, media associated with the interaction, and or any other information that can be associated with an interaction. In some implementations, the interaction data can include an embedding that is generated based on the interaction that was detected. For example, one or more trained machine learning models can be used to process data associated with the detective interaction in order to generate an embedding corresponding to the interaction. The embedding can thereafter be compared to other embeddings in latent space to determine a similarity or relevance to the other embeddings.” (¶ 47) | See also ¶¶ 50 & 65];
dynamically creating a user interface display of the at least one target applications in near real-time as the user input is being typed or spoken, the display updating as additional tokens of the user input are received [e.g. a user interface display of the at least one target application is dynamically created and updated in near real-time as the user input is being typed or spoken, as additional tokens/words/commands of the user input are received. In other words, the assistant responds immediately to the current and subsequent requests/commands (¶¶ 04-09, 23, 46, 52-55, & 58)]; and
responsive to identifying that content is being communicated to the external device [Examiner’s Note: Even though the prior art rejection included below does not depend on the following technicality, it is nonetheless respectfully noted that the broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met. Therefore, as currently claimed, functionalities that depend on the “responsive to” condition being true may not narrow the claims to the extent intended since, for purposes of prior art analysis, any prior art scenario showing at least one mappable instance wherein the contingency/triggering condition is not met would suffice to anticipate or teach these aspects. In other words, without a preceding step of actually identifying “the content,” any limitation that is carried out “responsive to identifying the content” does not hold considerable patentable weight in this claim, because any scenario wherein “the content” is not identified would be sufficient to anticipate and/or render obvious the entire clause. See “Contingent Limitations” in MPEP § 2111.04, subsection II and/or MPEP § 2143.03.], displaying the content {…} of the at least one target application in the computer user interface [e.g. displaying at least some content associated with the at least one target application in the computer user interface (¶¶ 04, 07, 20, 23, & 40) in response to identifying that at least some content has been and/or is being communicated to the external device (¶¶ 10, 23, & 44)].
Even though Carbune shows many instances of displaying content of the at least one target application in the computer user interface, Carbune does not appear to explicitly recite “displaying {…} one or more icons of the at least one target application in the computer user interface” as apparently intended. In an analogous art, Chen shows:
displaying the content and one or more icons of the at least one target application in the computer user interface [e.g. in response to detecting a text input (and identifying that it has been communicated to the corresponding destination device), displaying both the content and one or more icons of the at least one target application in the computer user interface (Chen: ¶¶ 24-26, 31-33, & 38)].
One of ordinary skill in the art, having the teachings of Carbune and Chen before them prior to the effective filing date of the claimed invention, would have been motivated to incorporate Chen’s application icon presenting techniques into Carbune. The rationale for doing so would have been that Carbune already presented “content” in an analogous manner, and presenting application icons specifically would have improved Carbune’s user experience by avoiding having “to navigate to and launch a third party application {which} can result in excess usage of battery life and/or processing resources of the computing device” (Carbune: ¶ 02). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Carbune and Chen (hereinafter, the “Carbune-Chen” combination) in order to obtain the invention as recited in claims 1, 9, and 16.
As to dependent claims 2, 10, and 17, Carbune-Chen further shows:
receiving, by the one or more computer processors, selection by the user of one or more of the at least one target application for the user input [e.g. receiving a user confirmation of the predicted target application (Carbune: ¶¶ 04 & 29)]; and
retrieving, by the one or more computer processors, a user authorization for each of the one or more of the at least one target application for the user input selected by the user [e.g. retrieving a user “permission”/authorization status of the target application (Carbune: ¶¶ 10, 23, & 44)].
As to dependent claims 3, 11, and 18, Carbune-Chen further shows:
sending, by the one or more computer processors, the user input and the one or more of the at least one target application for the user input selected by the user to the database; and creating, by the one or more computer processors, in the database, a knowledge-based corpus of a plurality of the user inputs and a plurality of the one or more of the at least one target application selected by the user for each user input of the plurality of the user inputs [e.g. maintaining a knowledge-based corpus of user inputs/interactions and selected applications for machine learning/future prediction purposes (Carbune: ¶¶ 07-10, 23, & 47-50)].
As to dependent claims 4, 12, and 19, Carbune-Chen further shows:
wherein retrieving, from the database, each of one or more previous target applications selected by the user associated with the at least one of the previous user inputs matching the user input, further comprises: retrieving, by the one or more computer processors, the contextual analysis of the user input; determining, by the one or more computer processors, based at least in part, on the contextual analysis of the user input, the at least one user input in the database matching the user input; determining, by the one or more computer processors, each of the one or more of the at least one target application selected by the user that are associated with the at least one of previous user inputs matching the user input as at least one of the at least one target application for the user input [“{…} in response to the spoken utterance, the automated assistant can process audio data characterizing the spoken utterance in order to identify a conversation and/or application that is most related to the spoken utterance. {…}” (Carbune: ¶ 07)
“As an example, in response to the spoken utterance, “Assistant, tell Luke that ‘I like the choreography in this video,’” the automated assistant can determine whether an existing conversation and/or application is associated with a person named “Luke.” When the automated assistant identifies one or more conversations and/or applications associated with the person “Luke,” the automated assistant can further identify a particular conversation and/or application. Alternatively, or additionally, the automated assistant can process audio data corresponding to the spoken utterance in order to identify a topic and/or summary of the spoken utterance, and determine a relevance of the spoken utterance to one or more conversations and/or applications. For example, historical interactions between the user and one or more applications can be characterized by application data that can be processed by the automated assistant in order to determine whether there are associations between the spoken utterance and prior interactions. When one or more terms from the spoken utterance are synonymous with an identified topic associated with a particular application, the automated assistant can select the particular application as the targeted application that the user intends to be affected by the spoken utterance.” (Carbune: ¶ 09)
“{…} The operation 404 can include generating interaction data based on the detected interaction. For example, the interaction data can degenerated with prior permission from a user to indicate one or more participants in the interaction, the application in which the interaction was carried out, temporal data associated with the interaction, semantic understanding information, media associated with the interaction, and or any other information that can be associated with an interaction. In some implementations, the interaction data can include an embedding that is generated based on the interaction that was detected. For example, one or more trained machine learning models can be used to process data associated with the detective interaction in order to generate an embedding corresponding to the interaction. The embedding can thereafter be compared to other embeddings in latent space to determine a similarity or relevance to the other embeddings.” (Carbune: ¶ 47)]; and
displaying, by the one or more computer processors, the at least one target application associated for the user input [e.g. the at least one target application associated for the user input may be displayed to a user (Carbune: ¶¶ 04 & 07)].
As to dependent claims 5, 13, and 20, Carbune-Chen further shows:
wherein retrieving the status of the one or more connected computer devices includes retrieving, by the one or more computer processors, an authorization to one or more of the at least one target application for the user input [e.g. retrieving a user “permission”/authorization to the target/predicted application for the user input (Carbune: ¶¶ 10, 23, & 44)].
Claims 6-8, 14, and 15 are rejected under 35 U.S.C. § 103 as being unpatentable over Carbune-Chen in further view of Zhao et al. (US Patent Application Pub. No. 2022/0269762, hereinafter “Zhao”).
As to dependent claims 6 and 14, Carbune-Chen further shows:
wherein predicting the at least one target application for the user input, based at least in part, on the database, further comprises: determining, by the one or more computer processors, a user's level of authorization for access to each of the at least one target application; and providing, by the one or more computer processors, authorization [e.g. confirming that the system has been given a proper/sufficient level (at least in a binary sense) of “permission”/authorization before accessing the predicted/target application on the user’s behalf (Carbune: ¶¶ 10, 23, & 44)].
As indicated above, Carbune-Chen shows confirming that the system has been given a proper/sufficient level (at least in a binary sense) of “permission”/authorization before accessing the predicted/target application on the user’s behalf (Carbune: ¶¶ 10, 23, & 44). Nonetheless, Carbune-Chen does not appear to explicitly recite “providing, by the one or more computer processors, authorization credentials associated with each of the at least one target application for the user input that the user's level of authorization provides the user access to” as apparently intended. In an analogous art, Zhao shows:
determining, by the one or more computer processors, a user's level of authorization for access to each of the at least one target application [e.g. a “WeChat” application (Zhao: fig. 2D; ¶¶ 160-162)]; and providing, by the one or more computer processors, authorization credentials associated with each of the at least one target application for the user input that the user's level of authorization provides the user access to [“As shown in FIG. 7E, after the electronic device 100 recognizes, from the voice signal of the user, that the voice instruction is “displaying the payment interface of WeChat”, and determines that the password with the specified quantity of characters that is entered by the user matches the stored password template, the electronic device 100 may unlock the screen and display a payment interface 730 of WeChat.” (Zhao: ¶ 162)
For even further examples of providing authorization credentials (like voiceprint authentication credentials, face authentication credentials, fingerprint authentication credentials, etc. (Zhao: ¶¶ 102 & 114)) associated with each of the at least one target application for the user input that the user's level of authorization provides the user access to, see also Zhao: ¶¶ 160-161, 174, 210, & 244.]
One of ordinary skill in the art, having the teachings of Carbune, Chen, and Zhao before them prior to the effective filing date of the claimed invention, would have been motivated to modify Carbune-Chen to incorporate Zhao’s techniques of providing authorization credentials associated with each of the at least one target application for the user input that the user's level of authorization provides the user access to. The rationale for doing so would have been to ensure that only authorized users are allowed to access target applications, which “improves information security of a terminal” (Zhao: ¶ 102) and “simplifies operation steps for voice control over a function or an application on the electronic device by the user, and reduces operation time of the user” (Zhao: ¶ 174), beyond Carbune’s preexisting permissions-based teachings. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Carbune, Chen, and Zhao (hereinafter, the “Carbune-Chen-Zhao” combination) in order to obtain the invention as recited in claims 6 and 14.
As to dependent claims 7 and 15, Carbune-Chen-Zhao further shows:
displaying, by the one or more computer processors, each of the at least one target application for the user input that the user's level of authorization provides the user access to [Zhao explicitly shows displaying a target application for the user input (in its example, a “WeChat” application) that the user’s level of authorization provides the user access to (Zhao: ¶¶ 160-162, 174, & 210). Carbune supplements this by also showing how each target application for the user input that has proper “permission”/authorization may be displayed to a user (Carbune: ¶¶ 04 & 07).].
As to dependent claim 8, Carbune-Chen-Zhao further shows:
wherein displaying each of the at least one target application for the user input that the user's level of authorization provides the user access to includes preventing, by the one or more computer processors, the display of each of the at least one target application for the user input that the user does not have access to [Zhao shows the active prevention of the display of the at least one target application for the user input that the user does not have access to as long as their attempts at authentication fail (Zhao: ¶¶ 13, 137, & 214). Carbune supplements this by also showing that if the system has not been given a proper/sufficient level (at least in a binary sense) of “permission”/authorization to access the predicted/target application on the user’s behalf, then the application is prevented from being accessed/displayed (Carbune: ¶¶ 10, 23, & 44).].
Response to Arguments
Applicant’s arguments have been fully considered but they are not persuasive. Applicant argues:
“{…} Applicant respectfully asserts that even if paragraphs 10, 23, 39, and 44 of Carbune taught the previously presented claim language of "retrieving, by the one or more computer processors, a status of one or more connected computer devices" - a conclusion to which Applicant does not concede - they certainly do not teach or suggest the amended claim language of "retrieving, by the one or more computer processors, a status of one or more connected computer devices the status comprising connection state and applications currently executing on the connected devices; ...performing, by the one or more computer processors, a contextual analysis of the user input using one or more natural language processing (NLP) feature representations comprising at least one of word embeddings, sentence embeddings, dependency graphs, or co-reference graphs...retrieving, by the one or more computer processors, from a database, each of one or more previous target applications associated with at least one of previous user inputs matching the user input, wherein the status of the one or more connected computer devices and surrounding device context are used to filter retrievable target applications... [and] dynamically creating a user interface display of the at least one target applications in near real-time as the user input is being typed or spoken, the display updating as additional tokens of the user input are received" as required by claim 1 as amended. {…}”
The Office respectfully disagrees and asserts that the cited art reasonably shows: retrieving, by the one or more computer processors, a status of one or more connected computer devices the status comprising connection state and applications currently executing on the connected devices [see the entirety of the cited passage of ¶ 39 above, which describes both a connection state and applications currently executing on the connected devices]; performing, by the one or more computer processors, a contextual analysis of the user input using one or more natural language processing (NLP) feature representations comprising at least one of word embeddings, sentence embeddings, dependency graphs, or co-reference graphs [e.g. generating “contextual data” about the user input (¶¶ 08, 30, 39-44) using one or more natural language processing (NLP) feature representations comprising at least one of word embeddings, sentence embeddings, dependency graphs, or co-reference graphs (¶¶ 09 & 26-29)]; retrieving, by the one or more computer processors, from a database [e.g. file storage subsystem 526 (¶ 60)], each of one or more previous target applications associated with at least one of previous user inputs matching the user input, wherein the status of the one or more connected computer devices and surrounding device context are used to filter retrievable target applications [the “state of a respective device that is associated with the computing device 302” (¶ 39) and surrounding device context (¶ 39) are used at least in part to filter/choose/narrow down retrievable/available/API-accessible target applications (¶¶ 10, 46, & 49-54). In other words, paragraph 39 shows that the state/status of the one or more connected computer devices and their capability of performing the functionality of the target application are taken into consideration to “filter” (or choose, among all other possible applications) which application is contextually appropriate for the task at hand.
The other cited paragraphs (¶¶ 10, 46, & 49-54) show even further examples of how “filtering”/choosing appropriate applications may be reduced to practice.]; and dynamically creating a user interface display of the at least one target applications in near real-time as the user input is being typed or spoken, the display updating as additional tokens of the user input are received [e.g. a user interface display of the at least one target application is dynamically created and updated in near real-time as the user input is being typed or spoken, as additional tokens/words/commands of the user input are received. In other words, the assistant responds immediately to the current and subsequent requests/commands (¶¶ 04-09, 23, 46, 52-55, & 58)].
Therefore, the Office respectfully asserts that the cited art sufficiently teaches the limitations recited in the amended claims.
Conclusion
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALVARO R CALDERON IV whose telephone number is (571) 272-1818. The examiner can normally be reached on Monday - Friday (8:30am - 5pm).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kieu Vu can be reached on (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALVARO R CALDERON IV/
Examiner, Art Unit 2171
/KIEU D VU/Supervisory Patent Examiner, Art Unit 2171