DETAILED ACTION
This communication is in response to the Application filed on 05/29/2024. Claims 1-20 are pending and have been examined. Claims 1, 14 and 19 are independent. This Application was published as U.S. Pub. No. 2025/0371278A1.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 05/29/2024 has been reviewed. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 11 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Saraf et al., (US Pub No. 2025/0321989, hereinafter, Saraf) in view of Heere et al., (US Pub No. 2021/0089860, hereinafter, Heere).
Regarding Claim 1,
Saraf discloses a computer-implemented method comprising (Saraf, Fig.9, par [127], "…process 900 performed by an extraction engine implemented in a computer program..."):
receiving a transcript of communications (Saraf, par [129], "…at block 910...the extraction engine receives a communication including unstructured data defining an action...");
submitting a prompt to a Large Language Model (LLM), the prompt comprising the transcript and a request for the LLM to identify one or more tasks discussed in the transcript and generate, for each task, a structured output comprising a summary of the task and a next action for the task (Saraf, par [092], "…the module 545 process details of user requests/actions by utilizing language models..."; par [130], "…At block 930...The extraction engine can automatically process the communication by utilizing at least one generative AI model to extract details of the action being performed in the unstructured data...The generative AI model comprises one or more of a generative pre-trained transform (GPT), an AI agent, and a large language models (LLM)...");
receiving a structured output for a first task of the one or more tasks from the LLM in response to the prompt (Saraf, par [132], "…At block 940, the extraction engine automatically converts the action into an activity or a task of a process associated with the communication..."; par [134], "…converting unstructured data from Task Mining and communications mining (i.e., emails, chat, etc.) into structured data...");
mapping the first task to a corresponding first task object stored in a database based at least in part on the structured output for the first task (Saraf, par [133], "…At block 950, the extraction engine processes the activity or the task to match one or more of a plurality of existing automations or robotic process automations (RPAs) to the activity or task….");
storing the structured output for the first task in the database in a status update entity linked to the first task object (Saraf, par [044], "…data service (e.g., UiPath Data Service™) may be stored in database 140, for example, and bring data into a single, scalable, secure place with a drag-and-drop storage interface..."; par [078], "…database server 355");
Saraf discloses a chatbot (paras [048, 066]), but does not explicitly disclose the interactive GUI and the following limitations. However, Heere, in the analogous field of endeavor, discloses displaying, via a user interface, data from the status update entity as a proposed update to the first task object (Heere, Figs.4-11, par [110], "...the GUI 400 is displayed on a computing device of a user via the front-end application 305..."; paras [110-115], "…the digital assistant system 300 generates a recommendation for addressing the event and prompts the user to select a selectable user interface element 760...");
receiving, via the user interface, an input accepting the proposed update (Heere, par [115], "…At any point during an interaction of a user with the digital assistant system 300, the user can give feedback to the digital assistant system 300... in textual or verbal form, but also through clicking on a feedback buttons or other selectable user interface elements..."); and
updating the first task object with the data from the status update entity in response to the input accepting the proposed update (Heere, Fig.1, par [043], "…Based on the observed data change...the digital assistant system 300 may issue a data event into a suitable outgoing channel (e.g., meeting preparation, approval process, daily business process updates..."; par [029], "…The relational database modules 142 can be utilized to add, delete, update and manage database elements...").
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the extraction engine for task mining out of unstructured data of Saraf with the AI-powered digital assistant system with an interactive GUI of Heere, with a reasonable expectation of success, to proactively generate notifications and recommend actions addressing future events, and to employ adequate contextual awareness when presenting content to users (Heere, par [003]).
Regarding Claim 2,
Saraf in view of Heere discloses the method of claim 1.
Saraf further discloses wherein the transcript comprises unstructured communication data regarding the one or more tasks (Saraf, par [130], "…At block 930...The extraction engine can automatically process the communication by utilizing at least one generative AI model to extract details of the action being performed in the unstructured data...").
Regarding Claim 3,
Saraf in view of Heere discloses the method of claim 2.
Saraf further discloses wherein the prompt further comprises a request for the LLM to generate, as part of the structured output for each task, at least one of: a current status of the task; a proposed status change for the task; a start time and an end time of discussion of the task in the transcript; a name of a user to whom the task is assigned; a summary of the discussion of the task; or a summary of action items identified for the task (Saraf, par [129], "…The communication can also include metadata, a timestamp, case information, user information, etc. that can be associated with the details of the request/action during extraction...").
Regarding Claim 4,
Saraf in view of Heere discloses the method of claim 1, further comprising,
Heere further discloses after displaying the status update entity as the proposed update via the user interface (Heere, Fig.6, paras [112-113], "…the digital assistant system 300 displays a visual indication 530 of the user instruction, summary 540 that was requested by the user. The summary 540 may provide more in-depth details regarding the event (i.e., status update entity)..."):
receiving, via the user interface, an input requesting one or more adjustments to the data from the status update entity (Heere, Fig.7, par [113-114], "...prompts the user to select a selectable user interface element 760 to trigger the presentation of the recommendation (i.e., adjustments)..."; "…the digital assistant system 300 may receive user instruction via textual or verbal input as well..."); and
modifying the status update entity based on the requested one or more adjustments; wherein updating the first task object with the data from the status update entity is performed after the modifying of the status update entity (Heere, Fig.7, par [114], "…the digital assistant system 300 displays a visual indication 770 of the user instruction, as well as a recommendation 780 for addressing the event (i.e., modified entity)...").
Regarding Claim 5,
Saraf in view of Heere discloses the method of claim 1, wherein updating the task object with the data from the status update entity comprises updating one or more task attributes of the task object based on the data from the status update entity (Saraf, par [129], "…The communication can also include metadata, a timestamp, case information, user information, etc. that can be associated with the details of the request/action during extraction..."; i.e., the unstructured or structured data extracted from communications/events are construed to inherently contain the same attributes used for the update).
Regarding Claim 6,
Saraf in view of Heere discloses the method of claim 1, wherein the prompt is a first prompt, and wherein mapping the first task to the corresponding task object comprises:
Heere further discloses generating a second prompt for the LLM to search the database for the corresponding task object (Heere, Fig.1, paras [024, 029], "…The cross-functional services 132 can include relational database modules to provide support services for access to the database(s) 130, which includes a user interface library 136..."; "…The relational database modules 142 can be utilized to add, delete, update and manage database elements..."),
the second prompt comprising: a list of existing task objects stored in the database (Heere, par [113], "…corresponding selectable user interface elements 752 configured to trigger display of the supporting data 754...");
data attributes of the first task derived from the structured output for the first task (…see claims 3 and 5 regarding data attributes); and
a request to search the list of existing task objects and return either an identifier of a task object having data attributes matching the data attributes of the first task or an empty string if no match is found (Heere, par [057], "…the digital assistant system 300 to access, search and "understand" large quantities of available textual resources and link the extracted information to the topic of the user's request…").
Regarding Claim 7,
Saraf in view of Heere discloses the method of claim 6, wherein the list of existing task objects is a subset of all existing task objects stored in the database, the method further comprising:
Heere further discloses generating the list of existing task objects by filtering the existing task objects stored in the database based on at least one of: a creation date; an edit date; a status; a latest comment; a latest status update; or a name of an associated user (Heere, Fig.1, paras [024-029], "…The relational database modules 142 can provide support services for access to the database(s) 130, which includes a user interface library 136… utilize a variety of database technologies including SQL, SQLDBC, Oracle, MySQL, Unicode, JDBC..."; i.e., it is well known to those skilled in the art that relational database modules organize data in tables (relations), with rows (tuples) representing records and columns (attributes) representing data fields, and support operations like selection, projection, join, union, and intersection, enabling powerful data retrieval and manipulation).
Regarding Claim 11,
Saraf in view of Heere discloses the method of claim 1, wherein the LLM is a first LLM, the method further comprising:
Heere further discloses prior to submitting the prompt to the first LLM, submitting a prompt to a second LLM, the prompt submitted to the second LLM comprising the transcript and a request for the second LLM to identify personal data in the transcript (Heere, par [047], "…the digital assistant system 300 is configured to provide role-specific and user-specific personalized adaptions and alterations... ranking, prioritization, filtering, and aggregation mechanisms from the realization of the context awareness features of the digital assistant system 300 may be customized for a user…") and
output a sanitized version of the transcript with the personal data removed, wherein the transcript included in the prompt submitted to the first LLM is the sanitized version of the transcript (Heere, par [059], "…The digital assistant system 300 may facilitate the sharing of information artifacts among users and tenants even in the case where confidential or otherwise restricted data is involved through its built-in anonymization Functionality...").
Regarding Claim 13,
Saraf in view of Heere discloses the method of claim 2.
Saraf further discloses wherein the transcript comprises at least one of: a meeting transcript; a transcript of an online chat session between at least two users; or a transcript of an online chat session between a chatbot that incorporates the LLM and a user (Saraf, par [020], "…the extraction engine performs processes as part of a process mining and discovery suite that access unstructured data (e.g., data from communications, emails, chats, comments, comments within tickets, etc.), as well as structured data...").
Claim 14 is a system claim with limitations similar to the limitations of Claim 1 and is rejected under similar rationale. Additionally,
Saraf discloses a computing system comprising: at least one hardware processor; at least one memory coupled to the at least one hardware processor, the at least one memory comprising a database storing a plurality of task objects; a large language model (LLM); and one or more non-transitory computer-readable media having stored therein computer-executable instructions that, when executed by the computing system (Saraf, Fig.4, par [087], "…Computing system 500 further includes a memory 515 for storing information and instructions to be executed by processor(s) 510...", "…Non-transitory computer-readable media may be any available media that can be accessed by processor(s) 510 and may include volatile media, non-volatile media, or both...";Fig.3, database 355; par [020], "…The extraction engine includes using generative AI techniques (e.g., generative pre-trained transforms (GPT)) and large language models (LLMs)..."),
…
Rationale for combination is similar to that provided for Claim 1.
Claims 8-10 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Saraf in view of Heere further in view of Penrose et al., (US Pub No. 2025/0258863, hereinafter, Penrose).
Regarding Claim 8,
Saraf in view of Heere discloses the method of claim 7.
But neither Saraf nor Heere explicitly discloses the embedding vector for the existing task object. However, Penrose, in the analogous field of endeavor, discloses wherein the list of existing task objects is a subset of all existing tasks stored in the database (Penrose, Fig.2, paras [078, 108], "…The index module 204 may organize and index the data...generating a searchable database of the legal case files..."; Fig. 6C, paras [441-443], "…The entity store 612A may serve as a centralized repository for storing and managing information about various entities identified within the system…a comprehensive database of entities, including their attributes, relationships, and historical data..."), the method further comprising:
for each existing task object in the list of existing objects, pre-creating an embedding vector for the existing task object based at least in part on data attributes of the existing task object (Penrose, Fig.6A, paras [309-313], "…The semantic search module 610B...the semantic search manager 610B-1 and the semantic search database 610B-2, which may work in tandem to process and store data for efficient retrieval..."; "…the semantic search manager 610B-1 may employ advanced natural language processing techniques, potentially including large language models...generate vector embeddings for each piece of content..." );
storing the embedding vectors (Penrose, par [312], "…The semantic search database 610B-2 may store these vector embeddings along with metadata about the original content. It may utilize specialized indexing structures...");
creating a search embedding vector for the first task based on the data attributes of the first task (Penrose, paras [079-082], "…the index module 204 may provide a retrieval augmented generation (RAG) index..."; "…Retrieval Augmented Generation (RAG)...The RAG model operates by first encoding the input prompt and using the encoded representation to retrieve relevant documents or passages from a knowledge base..."); and
generating the list of existing task objects by retrieving a predefined number of the existing tasks stored in the database whose respective embedding vectors are most closely related to the search embedding vector (Penrose, par [078], "…the index module 204 may provide a retrieval augmented generation (RAG) index..."; par [313], "…The semantic search module 610B may provide a RAG (Retrieval-Augmented Generation) chat capability...the system may uses the semantic search capabilities to retrieve relevant information from the database...").
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the AI-powered interactive digital assistant system taught by Saraf in view of Heere with the large language model of Penrose, with a reasonable expectation of success, to efficiently retrieve, analyze, and process large amounts of digital files/entities by creating indexed data representations and reducing the time and effort involved with the manual review process (Penrose, paras [004-008, 035-040]).
Regarding Claim 9,
Saraf in view of Heere discloses the method of claim 1, further comprising:
Saraf further discloses receiving a structured output for a second task of the one or more tasks from the LLM in response to the prompt (Saraf, par [132], "…At block 940, the extraction engine automatically converts the action into an activity or a task of a process associated with the communication..."; par [134], "…converting unstructured data from Task Mining and communications mining (i.e., emails, chat, etc.) into structured data...");
storing the structured output for the second task in the database in a new task entity (Saraf, par [044], "…data service (e.g., UiPath Data Service™) may be stored in database 140, for example, and bring data into a single, scalable, secure place with a drag-and-drop storage interface..."; par [078], "…database server 355");
Heere further discloses displaying, via the user interface, data from the new task entity (Heere, Figs.4-11, par [110], "...the GUI 400 is displayed on a computing device of a user via the front-end application 305..."; paras [110-115], "…the digital assistant system 300 generates a recommendation for addressing the event and prompts the user to select a selectable user interface element 760...");
receiving, via the user interface, an input accepting the data from the new task entity (Heere, par [115], "…At any point during an interaction of a user with the digital assistant system 300, the user can give feedback to the digital assistant system 300... in textual or verbal form, but also through clicking on a feedback buttons or other selectable user interface elements..."); and
But neither Saraf nor Heere explicitly discloses the determining and creating of a new task object. However, Penrose discloses determining that the second task does not correspond to any existing task objects stored in the database; creating a new task object in the database and populating the new task object with the data from the new task entity in response to the input accepting the data from the new task entity (Penrose, Fig.6A, par [523], "…If the user note (i.e., converted structured data with tasks) contains information about a new event, the event resolver 612B may create a new event entry and associate it with relevant entities...").
Rationale for combination is similar to that provided for Claim 8.
Regarding Claim 10,
Saraf in view of Heere further in view of Penrose discloses the method of claim 9, wherein the prompt is a first prompt, the method further comprising, after populating the new task object with the data from the new task entity:
Penrose further discloses submitting a second prompt to the LLM comprising the transcript and a request for the LLM to generate task description text for the new task object (Penrose, Fig.2, par [091], "...The summary module 210 may produce a narrative summary of the case...provides a concise and comprehensive overview of the case (i.e., task description)…by generating a summary prompt, and passing the summary prompt to the LLM to receive summary response data...");
receiving the task description text from the LLM in response to the second prompt (Penrose, par [093], "…The summary prompt may include a context portion and a request portion. The context portion may be populated with data from the index (e.g., RAG results)...The request portion may include default structure (e.g., text asking for textual summation of events) and parameters..."); and
adding the task description text to the new task object (Penrose, Fig.2, par [094], "…The summary module 210 may process the summary response (e.g., the summary data in JSON format) and generate user interfaces to display the summary data in an interactive manner...").
Regarding Claim 17,
Saraf in view of Heere discloses the system of claim 14.
But neither Saraf nor Heere explicitly discloses the limitation, "...a stored representation of a plurality of task groups, wherein each of the plurality of stored task objects is associated with a corresponding task group of the plurality of task groups."
However, Penrose discloses further comprising a stored representation of a plurality of task groups, wherein each of the plurality of stored task objects is associated with a corresponding task group of the plurality of task groups (Penrose, Fig.6A, paras [225-250], par [227], "…The ingest modules 608 may process the incoming data. These modules may include the directory classifier 608A, the text entity identifier 608B, the text classifier 608C..."; par [228-229], "…The directory classifier 608A may play a crucial role in organizing and categorizing incoming data within the system…", "…the directory classifier 608A may decompose complex directory structures into simpler, more manageable sub-objects..."; par [234], "…The text entity identifier 608B may process various types of objects...";paras [242-244], "…For text objects 606C, the text classifier 608C may examine the entire body of text to determine its category…based on the prevalent topics and terminology used..."), and
Heere further discloses wherein the computer-executable instructions comprise computer-executable instructions that, when executed by the computing system, cause the computing system to perform: mapping the transcript to a corresponding task group of the plurality of task groups (Heere, Fig.1, paras [024, 029], "…The cross-functional services 132 can include relational database modules to provide support services for access to the database(s) 130, which includes a user interface library 136..."; "…The relational database modules 142 can be utilized to add, delete, update and manage database elements...").
Rationale for combination is similar to that provided for Claim 8.
Regarding Claim 18,
Saraf in view of Heere further in view of Penrose discloses the system of claim 17.
Penrose further discloses the transcript comprises at least one of: a transcript of a meeting regarding the corresponding task group; a transcript of a user-initiated online chat session regarding the corresponding task group; or a transcript of a chatbot-initiated online chat session regarding the corresponding task group (Penrose, par [259], "…in meeting minutes or interview transcripts, it may recognize dialogue structures..."; par [346], "…Text objects 606C: The extractor may extract various text-based data, such as SMS messages, chat logs, notes, or contact information…").
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Saraf in view of Heere further in view of Grillo et al., (US Pub No. 2025/0063140, hereinafter, Grillo).
Regarding Claim 12,
Saraf in view of Heere discloses the method of claim 1.
But neither Saraf nor Heere explicitly discloses the segmentation of the transcript according to the discussed topics. However, Grillo discloses wherein the transcript is a segment of a longer transcript of communications, the method further comprising: prior to receiving the transcript, generating the transcript by applying the LLM to divide the longer transcript into a plurality of segments (Grillo, Fig.2, par [062], "…the smart topic generation system 102 utilizes a context transformer engine 206, a smart topic agent 208, and a large language model 210..."; par [063], "…To generate the smart topic output 212, the smart topic generation system 102 provides the transcript 204 (and/or application data from other computer applications) to the context transformer engine 206, which processes and breaks down the transcript 204..."),
wherein each of the plurality of segments encapsulates a discussion of a respective one of a plurality of different topics (Grillo, par [063], "…the context transformer engine 206 identifies portions of transcript 204 that correspond to or mention particular subject matter to topics...").
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the AI-powered interactive digital assistant system taught by Saraf in view of Heere with the smart topic generation system of Grillo, with a reasonable expectation of success, to efficiently and correctly identify topic-specific portions of the transcript and digital content outside the transcript corresponding to the topic, with a more efficient interface that does not require excessive navigational inputs (Grillo, paras [002-005]).
Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Saraf in view of Penrose.
Regarding Claim 19,
Penrose discloses one or more non-transitory computer-readable media storing computer-executable instructions, the instructions comprising (Penrose, paras [012, 681-682], "…a non-transitory computer-readable medium may store instructions that, when executed by a processor, cause a computer to perform a method for processing and analyzing..."):
first instructions to identify, among a plurality of task objects stored in a database of a software application, a task object associated with an open question (Penrose, Fig.6C, par [434], "…when a user asks a question through the chat interface 624D, question answerer 620A may consult the entity resolution module 612 to identify and disambiguate entities mentioned in the query. The entity store 612A may provide comprehensive information about known entities...");
second instructions to generate a textual description of the task object (Penrose, Fig.2, paras [091], "...The summary module 210 may produce a narrative summary of the case...provides a concise and comprehensive overview of the case (i.e., task description)…by generating a summary prompt, and passing the summary prompt to the LLM to receive summary response data...");
third instructions to initiate a chat regarding the task object (Penrose, Fig.5A, par [570], "…The chat interface 624D may provide a conversational user interface for interacting with the system. This component may allow users to input natural language queries, receive responses, and engage in dialogue-style interactions with the system's AI capabilities...");
fourth instructions to submit one or more prompts comprising the textual description of the task object to a Large Language Model (LLM) incorporated in a chatbot (Penrose, Fig.4, par [066], "…the LLM service 115C or modules of the LLM server 115A that are locally-hosted can also provide a chatbot interface for user interaction with the large language model..."), wherein the one or more prompts cause the chatbot to request an answer to the open question during the chat (Penrose, par [066], "…This allows the user to ask questions and receive textual answers about the case directly on their device...");
fifth instructions to end the chat in response to receiving the answer to the open question (Penrose, Fig.5I, chat interface after receiving the answer; paras [570-580], "…The chat interface 624D may work closely with the chat service 620 to provide users with an interactive and intelligent conversational experience..."; par [573], "…Response Generation...formulate a response based on this information and send it back to the chat interface 624D for display to the user…");
But Penrose does not explicitly disclose the creation and storage of the structured output.
However, Saraf discloses sixth instructions to create a structured output based on a transcript of the chat (Saraf, par [092], "…the module 545 process details of user requests/actions by utilizing language models..."; par [130], "…At block 930...The extraction engine can automatically process the communication by utilizing at least one generative AI model to extract details of the action being performed in the unstructured data...The generative AI model comprises one or more of a generative pre-trained transform (GPT), an AI agent, and a large language models (LLM)..."); and
seventh instructions to store data from the structured output in a status update entity linked to the task object in the database (Saraf, par [044], "…data service (e.g., UiPath Data Service™) may be stored in database 140, for example, and bring data into a single, scalable, secure place with a drag-and-drop storage interface..."; par [078], "…database server 355").
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the extraction engine for task mining out of unstructured data of Saraf with the large language model with chat service of Penrose, with a reasonable expectation of success, to efficiently retrieve, analyze, and process large amounts of digital files/entities by reducing the time and effort involved with the manual review process and providing users with an interactive and intelligent conversational experience (Penrose, paras [004-008, 035-040, 571-580]).
Regarding Claim 20,
Saraf in view of Penrose discloses the computer-readable media of claim 19,
Penrose further discloses wherein the task object is identified by the LLM in response to a prompt comprising a request to identify which of the plurality of task objects require clarification, or wherein the task object is identified in response to a request for clarification of the task object received via a user interface of the software application (Penrose, Fig.7A-H, par [570], "…The chat interface 624D may support features such as context-aware responses, suggestion of follow-up questions, and the ability to refine or expand on previous queries…"; i.e., Fig.7A-H illustrate the GUI interact with the user to provide the clarification with various interactions such the summary, supporting data, and suggestions).
Allowable Subject Matter
Claims 15-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Schmidt et al., (US Pub No. 2022/0245563) discloses project management systems, and more particularly, to systems that display interactive project tracking information.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JANGWOEN LEE whose telephone number is (703)756-5597. The examiner can normally be reached Monday-Friday 8:00 am - 5:00 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BHAVESH MEHTA can be reached at (571)272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JANGWOEN LEE/Examiner, Art Unit 2656
/BHAVESH M MEHTA/Supervisory Patent Examiner, Art Unit 2656