DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1, 2, 9 and 18-20 are objected to because of the following informalities:
For claim 1, Examiner believes this claim should be amended in the following manner:
One or more processors comprising processing circuitry to:
receive, by an interpreter of an interactive agent platform that supports simultaneous execution of agent actions in different interaction modalities, one or more representations of one or more detected user actions;
generate, based at least on the interpreter executing one or more instruction lines of one or more interaction flows in response to the one or more detected user actions, one or more representations of one or more responsive agent actions; and
cause, based at least on the one or more representations of the one or more responsive agent actions, presentation of a rendering of [[the]] an interactive agent executing the one or more responsive agent actions.
For claim 2, Examiner believes this claim should be amended in the following manner:
The one or more processors of claim 1, wherein the interactive agent platform supports handling the one or more detected user actions independent of executing the one or more responsive agent actions.
For claim 9, Examiner believes this claim should be amended in the following manner:
The one or more processors of claim 1, wherein the one or more processors are comprised in at least one of:
a control system for an autonomous or semi-autonomous machine;
a perception system for an autonomous or semi-autonomous machine;
a system for performing simulation operations;
a system for performing digital twin operations;
a system for performing light transport simulation;
a system for performing collaborative content creation for three-dimensional (3D) assets;
a system for performing deep learning operations;
a system for performing remote operations;
a system for performing real-time streaming;
a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content;
a system implemented using an edge device;
a system implemented using a robot;
a system for performing conversational artificial intelligence (AI) operations;
a system implementing one or more language models;
a system implementing one or more large language models (LLMs);
a system implementing one or more vision language models (VLMs);
a system implementing one or more multimodal language models;
a system for generating synthetic data;
a system for generating synthetic data using AI;
a system incorporating one or more virtual machines (VMs);
a system implemented at least partially in a data center; or
a system implemented at least partially using cloud computing resources.
For claim 18, Examiner believes this claim should be amended in the following manner:
The system of claim 10, wherein the system is comprised in at least one of:
a control system for an autonomous or semi-autonomous machine;
a perception system for an autonomous or semi-autonomous machine;
a system for performing simulation operations;
a system for performing digital twin operations;
a system for performing light transport simulation;
a system for performing collaborative content creation for three-dimensional (3D) assets;
a system for performing deep learning operations;
a system for performing remote operations;
a system for performing real-time streaming;
a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content;
a system implemented using an edge device;
a system implemented using a robot;
a system for performing conversational artificial intelligence (AI) operations;
a system implementing one or more language models;
a system implementing one or more large language models (LLMs);
a system implementing one or more vision language models (VLMs);
a system implementing one or more multimodal language models;
a system for generating synthetic data;
a system for generating synthetic data using AI;
a system incorporating one or more virtual machines (VMs);
a system implemented at least partially in a data center; or
a system implemented at least partially using cloud computing resources.
For claim 19, Examiner believes this claim should be amended in the following manner:
A method comprising:
receive, by an interpreter of an interactive agent platform that supports simultaneous execution of agent actions in different interaction modalities, one or more representations of one or more detected user actions; and
generate, based at least on the interactive agent platform executing one or more instruction lines of one or more interaction flows in response to the one or more detected user actions, one or more representations of one or more responsive agent actions.
For claim 20, Examiner believes this claim should be amended in the following manner:
The method of claim 19, wherein the method is performed by at least one of:
a control system for an autonomous or semi-autonomous machine;
a perception system for an autonomous or semi-autonomous machine;
a system for performing simulation operations;
a system for performing digital twin operations;
a system for performing light transport simulation;
a system for performing collaborative content creation for three-dimensional (3D) assets;
a system for performing deep learning operations;
a system for performing remote operations;
a system for performing real-time streaming;
a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content;
a system implemented using an edge device;
a system implemented using a robot;
a system for performing conversational artificial intelligence (AI) operations;
a system implementing one or more language models;
a system implementing one or more large language models (LLMs);
a system implementing one or more vision language models (VLMs);
a system implementing one or more multimodal language models;
a system for generating synthetic data;
a system for generating synthetic data using AI;
a system incorporating one or more virtual machines (VMs);
a system implemented at least partially in a data center; or
a system implemented at least partially using cloud computing resources.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
For dependent claim 2, the parent claim establishes “agent actions” and “one or more responsive agent actions”. Claim 2 goes on to recite the phrase “the agent actions”, and it is unclear and ambiguous which of the previously established “agent actions” and “one or more responsive agent actions” is being referenced by the phrase “the agent actions”. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguities.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 2, 8-11 and 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Froelich (U.S. Patent Application Publication 2017/0256259 A1) (made of record in the IDS submitted 7/14/2025) in view of Griffin (U.S. Patent Application Publication 2018/0176269 A1) (made of record in the IDS submitted 12/01/2025) and Munro et al. (U.S. Patent Application Publication 2021/0216349 A1, hereinafter “Munro”).
For claim 1, Froelich discloses one or more processors comprising processing circuitry (disclosing a processor implemented with circuitry to perform processing (par. 157)) to: receive, by an interpreter of an interactive agent platform that supports simultaneous execution of agent actions in different interaction modalities, one or more representations of one or more detected user actions (disclosing a cloud platform to implement an intelligent software agent as a bot (par. 4 and 118) where the bot simultaneously executes actions in different interaction modalities such as a first modality of synthetic speech and a second modality of simulated human visual actions such as arm/hand movements, facial expressions and other body language as gestures to accompany the synthetic speech (par. 56 and 110); explaining the platform implements a speech recognition service as an interpreter to receive and interpret a user’s speech and call audio as representations of detected user actions (Fig. 4; par. 56 and 58)); generate, based at least on the interpreter executing one or more instruction lines of one or more interaction flows in response to the one or more detected user actions, one or more representations of one or more responsive agent actions (disclosing the speech recognition service executes sequences of instructions as instruction lines of a conversation flow as an interaction flow in response to the detected user’s speech and call audio (par. 56, 85 and 158-159); explaining the platform generates representations of agent actions for the bot responsive to the detected user’s speech and call audio (par. 56 and 110)); and cause, based at least on the one or more representations of the one or more responsive agent actions, presentation of a rendering of the interactive agent executing the one or more responsive agent actions (disclosing the platform presents a display of video to render the bot executing the representations of the responsive agent actions to perform the simulated visual actions accompanying the synthetic speech (par. 56 and 110)).
Examiner finds Froelich discloses simultaneous execution of agent actions in different interaction modalities for the reasons discussed above. In any case, these limitations are well-known in the art as disclosed in Griffin.
Griffin similarly discloses a system and method for implementing a call agent service with a bot where the bot interprets sequences of instructions of instruction lines (par. 10-12 and 26). Griffin explains the bot may respond to detected user actions with concurrent multimodal content feedback to simultaneously execute agent actions in different interaction modalities of audio, video and text (par. 10 and 20). It follows Froelich may be accordingly modified with the teachings of Griffin to simultaneously execute its agent actions in different interaction modalities.
A person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention would find it obvious to modify Froelich with the teachings of Griffin. Griffin is analogous art in dealing with a system and method for implementing a call agent service with a bot where the bot interprets sequences of instructions of instruction lines (par. 10-12 and 26). Griffin discloses its use of concurrent multimodal content responses is advantageous in simultaneously executing agent actions in audio, video and text to provide appropriate feedback to a user for appropriate collaboration (par. 20 and 22). Consequently, a PHOSITA would incorporate the teachings of Griffin into Froelich for simultaneously executing agent actions in audio, video and text to provide appropriate feedback to a user for appropriate collaboration.
Examiner finds Froelich as modified by Griffin discloses a rendering of an interactive agent for the reasons discussed above. In any case, these limitations are well-known in the art as disclosed in Munro.
Munro similarly discloses a system and method for facilitating human-computer interaction between a human and an autonomous agent (par. 1-2). Munro explains it is known to render a display of the autonomous agent within a virtual environment and to similarly render actions performed by the autonomous agent within the virtual environment (par. 83, 85 and 105). It follows Froelich and Griffin may be accordingly modified with the teachings of Munro to present a rendering of its interactive agent executing its one or more responsive agent actions.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Froelich and Griffin with the teachings of Munro. Munro is analogous art in dealing with a system and method for facilitating human-computer interaction between a human and an autonomous agent (par. 1-2). Munro discloses its use of rendering is advantageous in displaying an autonomous agent and corresponding actions within a virtual environment to facilitate appropriate interaction with a user (par. 83, 85 and 105). Consequently, a PHOSITA would incorporate the teachings of Munro into Froelich and Griffin for displaying an autonomous agent and corresponding actions within a virtual environment to facilitate appropriate interaction with a user. Therefore, claim 1 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
For claim 2, depending on claim 1, Froelich as modified by Griffin and Munro discloses wherein the interactive agent platform supports handling detected user actions independent of executing the agent actions (Froelich discloses its platform supports a speech recognition service to handle detected user actions independent of an avatar generator for executing the agent actions (Fig. 4; par. 58)).
For claim 8, depending on claim 1, Froelich as modified by Griffin and Munro discloses wherein the interpreter supports tracking one or more active agent or scene actions initiated by the one or more interaction flows and stopping the one or more active agent or scene actions in response to completion of the one or more interaction flows (Froelich discloses its platform tracks responses as active agent actions performed by the bot and stops the responses to produce a final response where the conversation flow has been completed (par. 93-97 and 103)).
For claim 9, depending on claim 1, Froelich as modified by Griffin and Munro discloses wherein the one or more processors are comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system for performing remote operations; a system for performing real-time streaming; a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system implementing one or more language models; a system implementing one or more large language models (LLMs); a system implementing one or more vision language models (VLMs); a system implementing one or more multimodal language models; a system for generating synthetic data; a system for generating synthetic data using AI; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (Froelich discloses its processors are used to implement a control system for an autonomous agent (par. 4, 6 and 157-158)).
For claim 10, Froelich as modified by Griffin and Munro discloses a system comprising the one or more processors of claim 1 (see above as to claim 1).
For claim 11, depending on claim 10, this claim is a combination of the limitations of claim 10 and claim 2. It follows claim 11 is rejected for the same reasons as to claim 10 and claim 2.
For claim 17, depending on claim 10, this claim is a combination of the limitations of claim 10 and claim 8. It follows claim 17 is rejected for the same reasons as to claim 10 and claim 8.
For claim 18, depending on claim 10, this claim is a combination of the limitations of claim 10 and claim 9. It follows claim 18 is rejected for the same reasons as to claim 10 and claim 9.
For claim 19, Froelich as modified by Griffin and Munro discloses a method comprising steps corresponding to functions performed by the one or more processors of claim 1 (see above as to claim 1).
For claim 20, depending on claim 19, this claim is a combination of the limitations of claim 19 and claim 9. It follows claim 20 is rejected for the same reasons as to claim 19 and claim 9.
Claim(s) 3, 7, 12 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Froelich in view of Griffin and Munro further in view of Lee et al. (U.S. Patent Application Publication 2024/0379097 A1, hereinafter “Lee”).
For claim 3, depending on claim 1, Froelich as modified by Griffin and Munro discloses wherein the interactive agent platform supports non-sequential human-machine interactions in the different interaction modalities (Froelich discloses its platform supports non-sequential human-machine interactions in its different interaction modalities where the interactions facilitate a two-way conversation between a user and the bot (par. 4); Griffin similarly discloses a system and method for implementing a call agent service with a bot where the bot interprets sequences of instructions of instruction lines (par. 10-12 and 26); Griffin explains the bot may respond to detected user actions with concurrent multimodal content feedback to simultaneously execute agent actions in different interaction modalities of audio, video and text (par. 10 and 20); and it follows Froelich may be accordingly modified with the teachings of Griffin to simultaneously execute its agent actions in different interaction modalities).
Examiner finds Froelich as modified by Griffin and Munro discloses non-sequential human-machine interactions for the reasons discussed above. In any case, these limitations are well-known in the art as disclosed in Lee.
Lee similarly discloses a system and method to implement human-machine interactions between a user and a chatbot (par. 1-2). Lee explains a user may utter keywords such as “stop for only 30 seconds” to interrupt the human-machine interactions so that the human-machine interactions are non-sequential (par. 39 and 105). It follows Froelich, Griffin and Munro may be accordingly modified with the teachings of Lee to implement its human-machine interactions as non-sequential human-machine interactions.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Froelich, Griffin and Munro with the teachings of Lee. Lee is analogous art in dealing with a system and method to implement human-machine interactions between a user and a chatbot (par. 1-2). Lee discloses its use of keywords is advantageous in enabling a user to interrupt and control a flow of conversation between the user and a chatbot (par. 39 and 105). Consequently, a PHOSITA would incorporate the teachings of Lee into Froelich, Griffin and Munro for enabling a user to interrupt and control a flow of conversation between the user and a chatbot. Therefore, claim 3 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
For claim 7, depending on claim 1, Froelich as modified by Griffin, Munro and Lee discloses wherein the interpreter supports one or more keywords that instruct the interpreter to start or stop one or more groups of supported agent actions in the different interaction modalities (Lee similarly discloses a system and method to implement human-machine interactions between a user and a chatbot (par. 1-2); Lee explains a user may utter keywords such as “stop” to stop the human-machine interactions (par. 39); and it follows Froelich, Griffin and Munro may be accordingly modified with the teachings of Lee to implement a keyword to instruct its interpreter to stop one or more groups of supported agent actions in its different interaction modalities).
For claim 12, depending on claim 10, this claim is a combination of the limitations of claim 10 and claim 3. It follows claim 12 is rejected for the same reasons as to claim 10 and claim 3.
For claim 16, depending on claim 10, this claim is a combination of the limitations of claim 10 and claim 7. It follows claim 16 is rejected for the same reasons as to claim 10 and claim 7.
Claim(s) 4 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Froelich in view of Griffin and Munro further in view of Lee further in view of Marzinzik et al. (U.S. Patent Application Publication 2023/0028693 A1, hereinafter “Marzinzik”).
For claim 4, depending on claim 1, Froelich as modified by Griffin, Munro and Lee discloses wherein the interpreter supports one or more keywords that instruct the interpreter to interrupt the one or more interaction flows and wait for one or more specified events before advancing the one or more interaction flows (Lee similarly discloses a system and method to implement human-machine interactions between a user and a chatbot (par. 1-2); Lee explains a user may utter keywords such as “stop for only 30 seconds” to instruct the chatbot to interrupt a conversation and wait for 30 seconds to elapse as an event before advancing the conversation (par. 39 and 105); and it follows Froelich, Griffin and Munro may be accordingly modified with the teachings of Lee to implement keywords to instruct its interpreter to interrupt its one or more interaction flows and wait for one or more specified events before advancing its one or more interaction flows).
Examiner finds Froelich as modified by Griffin, Munro and Lee discloses waiting for one or more specified events before advancing for the reasons discussed above. In any case, these limitations are well-known in the art as disclosed in Marzinzik.
Marzinzik similarly discloses a system and method to implement human-machine interactions between a user and a voicebot (par. 2-4). Marzinzik likewise explains a user may utter words such as “just a moment” to instruct the bot to interrupt a conversation and wait for further caller input as a specified event before advancing the conversation (par. 62-63). It follows Froelich, Griffin, Munro and Lee may be accordingly modified with the teachings of Marzinzik to support keywords that instruct its interpreter to interrupt its interaction flows and wait for one or more specified events before advancing its interaction flows.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Froelich, Griffin, Munro and Lee with the teachings of Marzinzik. Marzinzik is analogous art in dealing with a system and method to implement human-machine interactions between a user and a voicebot (par. 2-4). Marzinzik discloses its use of keywords is advantageous in enabling a user to interrupt and control a bot to wait for further caller input to control a flow of conversation between the user and the bot (par. 62-63). Consequently, a PHOSITA would incorporate the teachings of Marzinzik into Froelich, Griffin, Munro and Lee for enabling a user to interrupt and control a bot to wait for further caller input to control a flow of conversation between the user and the bot. Therefore, claim 4 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
For claim 13, depending on claim 10, this claim is a combination of the limitations of claim 10 and claim 4. It follows claim 13 is rejected for the same reasons as to claim 10 and claim 4.
Claim(s) 5 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Froelich in view of Griffin and Munro further in view of Lee further in view of Manoharan et al. (U.S. Patent Application Publication 2020/0143797 A1, hereinafter “Manoharan”).
For claim 5, depending on claim 1, Froelich as modified by Griffin, Munro and Lee does not specifically disclose instructing an agent to trigger one or more specified agent or scene actions and wait for the one or more specified agent or scene actions to finish before advancing.
However, these limitations are well-known in the art as disclosed in Manoharan.
Manoharan similarly discloses a system and method to implement human-machine interactions between a user and a chat bot (par. 2). Manoharan explains a user may instruct the chat bot to complete various tasks through commands so that the chat bot triggers actions corresponding to the tasks and waits for the tasks to be completed before advancing a conversation between the user and the chat bot (par. 21 and 57). It follows Froelich, Griffin, Munro and Lee may be accordingly modified with the teachings of Manoharan to support keywords such as a complete command that instruct its interpreter to trigger one or more specified agent or scene actions and wait for the one or more specified agent or scene actions to finish before advancing.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Froelich, Griffin, Munro and Lee with the teachings of Manoharan. Manoharan is analogous art in dealing with a system and method to implement human-machine interactions between a user and a chat bot (par. 2). Manoharan discloses its use of complete commands is advantageous in enabling a user to request tasks for a bot to appropriately complete before advancing a conversation between the user and the bot (par. 21 and 57). Consequently, a PHOSITA would incorporate the teachings of Manoharan into Froelich, Griffin, Munro and Lee for enabling a user to request tasks for a bot to appropriately complete before advancing a conversation between the user and the bot. Therefore, claim 5 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
For claim 14, depending on claim 10, this claim is a combination of the limitations of claim 10 and claim 5. It follows claim 14 is rejected for the same reasons as to claim 10 and claim 5.
Claim(s) 6 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Froelich in view of Griffin and Munro further in view of Lee further in view of Ship et al. (U.S. Patent Application Publication 2022/0229860 A1, hereinafter “Ship”).
For claim 6, depending on claim 1, Froelich as modified by Griffin, Munro and Lee does not specifically disclose instructing an agent to trigger one or more specified agent or scene actions and advance without waiting for the one or more specified agent or scene actions to finish.
However, these limitations are well-known in the art as disclosed in Ship.
Ship similarly discloses a system and method to implement human-machine interactions between a user and a chatbot (par. 2). Ship explains a user may instruct the chatbot to perform specified agent actions and additionally instruct the chatbot with a skip command to cause the chatbot to advance without waiting for a specified agent action to finish (par. 60). It follows Froelich, Griffin, Munro and Lee may be accordingly modified with the teachings of Ship to support keywords such as a skip command that instruct its interpreter to trigger one or more specified agent or scene actions and advance without waiting for the one or more specified agent or scene actions to finish.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Froelich, Griffin, Munro and Lee with the teachings of Ship. Ship is analogous art in dealing with a system and method to implement human-machine interactions between a user and a chatbot (par. 2). Ship discloses its use of skip commands is advantageous in enabling a user to advance a conversation without waiting for a specified action to finish to appropriately control a flow of the conversation (par. 60). Consequently, a PHOSITA would incorporate the teachings of Ship into Froelich, Griffin, Munro and Lee for enabling a user to advance a conversation without waiting for a specified action to finish to appropriately control a flow of the conversation. Therefore, claim 6 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
For claim 15, depending on claim 10, this claim is a combination of the limitations of claim 10 and claim 6. It follows claim 15 is rejected for the same reasons as to claim 10 and claim 6.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES TSENG whose telephone number is (571)270-3857. The examiner can normally be reached 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLES TSENG/Primary Examiner, Art Unit 2613