DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/12/26 has been entered.
Response to Arguments
Applicant's arguments filed 1/12/26 have been fully considered but they are not persuasive.
Regarding the 35 U.S.C. 101 rejection of the claims, Applicant argues that the claims as amended are not directed to non-statutory subject matter (Arguments, pg. 11). Examiner respectfully disagrees as the claims are directed to data gathering, data analysis, and data transmission steps without significantly more as presented in the rejection below.
Regarding the 35 U.S.C. 103 rejection of claims 1, 12, 23 and 30 with references Trache and Rofouei, Applicant argues that para. [0065] of Trache describes determining a profile and template associated with a profile and updating the template and profile after receiving the first natural language input, but that Trache never discloses selecting a profile based on a combination of the user, the user context information and the user prompt; that the office action/advisory action conflates the temporal sequence “after” as described in Trache with causal or decision-making criteria, which is not equivalent to “based on a combination” as required by the claims; and further that the disclosure of Trache is based on alternative criteria (i.e., based on a user or based on context information) and does not rely on a combined evaluation of the user, the user context information and the user prompt (Arguments, pg. 12-16).
Examiner respectfully disagrees as Trache discloses receiving a natural language user input for a large language model (fig. 3, element 330; para. [0002]-[0005]; para. [0024]-[0025]; para. [0064]), and obtaining contextual information (para. [0038]), respectively corresponding to the limitations “receiving a user prompt for the LXM” and “obtaining user context information”.
Trache discloses: selecting an appropriate user profile (step 310 of fig. 3) after receiving a natural language user input/prompt from the user (step 330 of fig. 3) and updating the profile after receiving the input/prompt from the user (see para. [0065]), i.e., selecting a profile associated with the user in response to receiving the user prompt/first natural language input from the user, as well as selecting the user profile associated with the user and context information (para. [0005]; para. [0038]).
Therefore, Trache discloses receiving a natural language user input/prompt from a user and automatically selecting a profile associated with the user and context information in response to the user input/prompt, and as such, Examiner maintains that Trache discloses “in response to receiving the user prompt, selecting a user profile from among a plurality of user profiles based on a combination of the user, the user context information, and the user prompt”. Also, in response to Applicant’s argument that Trache performing the profile selection after receiving the user input/user prompt is not equivalent to profile selection based on the user/context: because Trache discloses selecting a user profile as a result of (after/subsequent to) the determined user input/prompt, the determined context information, and the user, Trache discloses the profile being selected “based on” the determined information and user. Furthermore, in response to Applicant’s argument that the disclosure of Trache is based on alternative criteria, Examiner respectfully disagrees as Trache explicitly discloses its selected profile as being based on at least the user and/or context information (para. [0005]; para. [0038]).
Absent any argument (Arguments, pg. 15) as to why the cited portions of Trache, Rofouei and additional reference Maurer fail to disclose limitations recited in the dependent claims, Examiner maintains that the rejections of the dependent claims are appropriate.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of prompt analysis without significantly more. Claims 1, 12, 23 and 30 recite steps of recognizing a user (i.e., a data analysis step), obtaining context information from a source (i.e., a data analysis/gathering step), receiving a user prompt for an LXM (i.e., a data analysis step), in response to receiving the user prompt, selecting a user profile from among a plurality of user profiles based on a combination of the user, the user context information and the user prompt (i.e., a data analysis step), generating an enhanced prompt based on the user prompt, the user context information and information in the selected profile (i.e., a data analysis step), and submitting the enhanced prompt to the LXM (i.e., a data transmission/post-solution step), corresponding to steps achievable by a human in analyzing gathered prompt data and context information and providing an output, and as such falling within the mental processes category of abstract ideas. This judicial exception is not integrated into a practical application because the claims are directed to an abstract idea with additional generic computer elements (computing device, LXM, memory, processor, processor-readable medium), where the generically recited computer elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps “generating an enhanced prompt based on the user prompt, the user context information, and information included in the selected user profile” and “submitting the enhanced prompt to the LXM” correspond to the well-understood, routine, conventional computer functions of “collecting information, analyzing it, and displaying certain results of the collection and analysis” and “receiving or transmitting data over a network” as recognized by the court decisions listed in MPEP § 2106.05, and as presented by cited references Trache and Rofouei.
The dependent claims 2-11, 13-22 and 24-29 also recite mental processes, do not add significantly more than the abstract idea, and are as such similarly rejected.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-30 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. In particular, the independent claims 1, 12, 23 and 30 recite the limitation “in response to receiving the user prompt, selecting a user profile from among a plurality of user profiles based on a combination of the user, the user context information, and the user prompt”. There is no disclosure of this limitation in Applicant’s original specification.
Applicant’s original specification (pg. 6-7, para. [0027]; fig. 2A) describes selecting a user profile from among a plurality of user profiles based on the user, the user context information, and the user prompt subsequent to receiving the user prompt, as well as combining contextual data with a user prompt and using the combination to select a suitable user profile, but not selecting a user profile from among a plurality of user profiles based on a combination of the user, the user context information, and the user prompt, in response to receiving the user prompt. The dependent claims are rejected based on their dependency.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
1. Claims 1-5, 10, 12-16, 21, 23-27 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Trache US 2024/0419658 A1 (“Trache”) in view of Rofouei et al. US 2024/0289407 A1 (“Rofouei”).
Per claim 1, Trache discloses a method performed by a computing device for generating a prompt for a large generative artificial intelligence model (LXM), comprising:
recognizing a user of the computing device (generate a detailed prompt for the language model based on a “profile” associated with a particular user …, para. [0005]; para. [0048]);
obtaining user context information (para. [0031]; para. [0038]);
receiving a user prompt for the LXM (the system may receive a simple user input …, para. [0005]; para. [0024]-[0025]);
in response to receiving the user prompt, selecting a user profile from among a plurality of user profiles based on a combination of the user, the user context information, and the user prompt (Abstract; fig. 3; the system may receive a simple user input and automatically generate a detailed prompt for the language model based on a “profile” associated with a particular user, role, cohort, use case, organization, and/or context…., para. [0005]; para. [0031]; At block 330, the AIS 102 receives a first natural language input…., para. [0064]; the AIS 102 may determine a profile and template associated with the profile, or update the determined profile and template, after receiving the first natural language input (or at any other time before block 340). For example, in some implementations, block 330 may be performed before blocks 310 and 320, para. [0065]; para. [0066], user profile selected in response to/as a result of natural language input/user prompt where selected profile is selected as a result of user and/or context information, i.e., selected profile is selected as a result of user prompt, user and context information);
generating an enhanced prompt based on the user prompt, the user context information, and information included in the selected user profile (para. [0041]; After receiving the user input 160, the context 170, and the selected profile 180, the prompt generation module 108 may generate an LLM prompt 190 based at least partly on the user input 160, the context 170, and the selected profile 180…., para. [0049]; para. [0066]); and
submitting the enhanced prompt to the LXM (The prompt generation module 108 may then provide the generated LLM prompt 190 to the LLM 130a, para. [0049]).
Trache does not explicitly disclose: obtaining user context information from a source of physical context information in the computing device.
However, this feature is taught by Rofouei (the user's state may be ascertained, e.g., by retrieving contextual information associated with the user or the client device. Some examples of contextual signals that may be generated by a client device include, for instance, time of day, location, current activity (e.g., determined from an accelerometer and/or gyroscope) …, para. [0004]; para. [0164]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Rofouei with the method of Trache in arriving at the missing features of Trache, because such combination would have resulted in providing content that is not only directly responsive to the latest query issued by the user, but is tailored to the user's ongoing state (Rofouei, para. [0002]; para. [0004]).
Per claim 2, Trache in view of Rofouei discloses the method of claim 1,
Rofouei discloses determining an activity of the user based on the user context information, wherein selecting the user profile from among the plurality of user profiles based on the user, the user context information, and the user prompt comprises selecting the user profile based at least in part on the determined activity of the user (Some examples of contextual signals that may be generated by a client device include, for instance, time of day, location, current activity (e.g., determined from an accelerometer and/or gyroscope) …, para. [0004]-[0005]; para. [0045]; para. [0047]; para. [0064]; the system, in determining whether the submitter of the query is already familiar with the certain content, compares profile data, of the profile, to the query and/or to search result document(s) that are responsive to the query and/or to one or more related queries. For example, the system can determine that the user is already familiar with the certain content if the comparison indicates the user has already interacted with search result document(s) having the certain content and/or has profile data that directly indicates familiarity with the certain content, para. [0136]; para. [0202]).
Per claim 3, Trache in view of Rofouei discloses the method of claim 1,
Rofouei discloses determining a location of the user based on the user context information, wherein selecting the user profile from among the plurality of user profiles based on the user, the user context information, and the user prompt comprises selecting the user profile based at least in part on the determined location of the user (para. [0004]; para. [0045]; para. [0047]; para. [0064]; the system, in determining whether the submitter of the query is already familiar with the certain content, compares profile data, of the profile, to the query and/or to search result document(s) that are responsive to the query and/or to one or more related queries. For example, the system can determine that the user is already familiar with the certain content if the comparison indicates the user has already interacted with search result document(s) having the certain content and/or has profile data that directly indicates familiarity with the certain content, para. [0136]; User state may also include a variety of contextual signals that may be determined from the user's device and/or from resources controlled by the user, such as the user's past, present, or future location (e.g., position coordinates output by a GPS sensor) …, para. [0164]).
Per claim 4, Trache in view of Rofouei discloses the method of claim 1,
Rofouei discloses determining an output device used by the user based on the user context information, wherein selecting the user profile from among the plurality of user profiles based on the user, the user context information, and the user prompt comprises selecting the user profile based at least in part on the determined output device (the context engine 113 can determine a context and/or update the user's state utilizing current or recent interaction(s) via the client device 110 …, para. [0045]; para. [0047]; para. [0064]; the system determines, based on a profile associated with the query (e.g., a device profile of the client device via which the query was submitted and/or a user profile of the submitter), whether the submitter of the query is already familiar with certain content that is responsive to the query …, para. [0136]; If the user interacts with the generative companion using spoken utterances, in some implementations, chat engine 144 may respond in kind using computer-generated speech output as current output 874. In some implementations in which the user wears an augmented reality (AR) or virtual reality (VR) client device (e.g., AR headset, VR headset, smart glasses, etc.), chat engine 144 may provide current output 874 as annotations of other objects and/or elements depicted in the AR/VR display(s), para. [0190], identified profile associated with user query/interaction, context/interactions used in determining output device).
Per claim 5, Trache in view of Rofouei discloses the method of claim 1,
Rofouei discloses: determining an output device used by the user based on the user context information (para. [0045]; para. [0190]); and
selecting one of a plurality of LXMs based on the determined output device (para. [0171]),
wherein submitting the enhanced prompt to the LXM comprises submitting the enhanced prompt to the selected LXM (para. [0171]).
Per claim 10, Trache in view of Rofouei discloses the method of claim 1,
Trache discloses updating the selected user profile based on how the user responds to an output received from the LXM in response to the enhanced prompt (para. [0071]).
Per claim 12, Trache discloses a computing device, comprising:
a memory (para. [0097]-[0098]);
at least one processor coupled to the memory and configured (para. [0097]-[0098]) to: recognize a user of the computing device (generate a detailed prompt for the language model based on a “profile” associated with a particular user …, para. [0005]; para. [0048]);
obtain user context information (para. [0031]; para. [0038]);
receive a user prompt for a large generative artificial intelligence model (LXM) (the system may receive a simple user input …, para. [0005]; para. [0024]-[0025]);
in response to receiving the user prompt, select a user profile from among a plurality of user profiles based on a combination of the user, the user context information, and the user prompt (Abstract; fig. 3; the system may receive a simple user input and automatically generate a detailed prompt for the language model based on a “profile” associated with a particular user, role, cohort, use case, organization, and/or context…., para. [0005]; para. [0031]; At block 330, the AIS 102 receives a first natural language input…., para. [0064]; the AIS 102 may determine a profile and template associated with the profile, or update the determined profile and template, after receiving the first natural language input (or at any other time before block 340). For example, in some implementations, block 330 may be performed before blocks 310 and 320, para. [0065]; para. [0066], user profile selected in response to/as a result of natural language input/user prompt where selected profile is selected as a result of user and/or context information, i.e., selected profile is selected as a result of user prompt, user and context information);
generate an enhanced prompt based on the user prompt, the user context information, and information included in the selected user profile (para. [0041]; After receiving the user input 160, the context 170, and the selected profile 180, the prompt generation module 108 may generate an LLM prompt 190 based at least partly on the user input 160, the context 170, and the selected profile 180…., para. [0049]); and
submit the enhanced prompt to the LXM (The prompt generation module 108 may then provide the generated LLM prompt 190 to the LLM 130a, para. [0049]).
Trache does not explicitly disclose to: obtain user context information from a source of physical context information in the computing device.
However, this feature is taught by Rofouei (the user's state may be ascertained, e.g., by retrieving contextual information associated with the user or the client device. Some examples of contextual signals that may be generated by a client device include, for instance, time of day, location, current activity (e.g., determined from an accelerometer and/or gyroscope) …, para. [0004]; para. [0164]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Rofouei with the device of Trache in arriving at the missing features of Trache, because such combination would have resulted in providing content that is not only directly responsive to the latest query issued by the user, but is tailored to the user's ongoing state (Rofouei, para. [0002]; para. [0004]).
Per claim 13, Trache in view of Rofouei discloses the computing device of claim 12,
Rofouei discloses wherein the at least one processor is further configured to: determine an activity of the user based on the user context information (Some examples of contextual signals that may be generated by a client device include, for instance, time of day, location, current activity (e.g., determined from an accelerometer and/or gyroscope) …, para. [0004]-[0005]; para. [0045]; para. [0202]; para. [0136]); and
select the user profile from among the plurality of user profiles based on the user, the user context information, the user prompt, and at least in part on the determined activity of the user (Some examples of contextual signals that may be generated by a client device include, for instance, time of day, location, current activity (e.g., determined from an accelerometer and/or gyroscope) …, para. [0004]-[0005]; para. [0045]; para. [0047]; para. [0064]; para. [0202]; the system, in determining whether the submitter of the query is already familiar with the certain content, compares profile data, of the profile, to the query and/or to search result document(s) that are responsive to the query and/or to one or more related queries. For example, the system can determine that the user is already familiar with the certain content if the comparison indicates the user has already interacted with search result document(s) having the certain content and/or has profile data that directly indicates familiarity with the certain content, para. [0136]).
Per claim 14, Trache in view of Rofouei discloses the computing device of claim 12,
Rofouei discloses: wherein the at least one processor is further configured to: determine a location of the user based on the user context information (para. [0164]); and
select the user profile from among the plurality of user profiles based on the user, the user context information, the user prompt, and at least in part on the determined location of the user (para. [0004]; para. [0045]; para. [0047]; para. [0064]; the system, in determining whether the submitter of the query is already familiar with the certain content, compares profile data, of the profile, to the query and/or to search result document(s) that are responsive to the query and/or to one or more related queries. For example, the system can determine that the user is already familiar with the certain content if the comparison indicates the user has already interacted with search result document(s) having the certain content and/or has profile data that directly indicates familiarity with the certain content, para. [0136]; User state may also include a variety of contextual signals that may be determined from the user's device and/or from resources controlled by the user, such as the user's past, present, or future location (e.g., position coordinates output by a GPS sensor) …, para. [0164]).
Per claim 15, Trache in view of Rofouei discloses the computing device of claim 12,
Rofouei discloses wherein the at least one processor is further configured to: determine an output device used by the user based on the user context information, and select the user profile from among the plurality of user profiles based on the user, the user context information, the user prompt, and at least in part on the determined output device (the context engine 113 can determine a context and/or update the user's state utilizing current or recent interaction(s) via the client device 110 …, para. [0045]; para. [0047]; para. [0064]; the system determines, based on a profile associated with the query (e.g., a device profile of the client device via which the query was submitted and/or a user profile of the submitter), whether the submitter of the query is already familiar with certain content that is responsive to the query …, para. [0136]; If the user interacts with the generative companion using spoken utterances, in some implementations, chat engine 144 may respond in kind using computer-generated speech output as current output 874. In some implementations in which the user wears an augmented reality (AR) or virtual reality (VR) client device (e.g., AR headset, VR headset, smart glasses, etc.), chat engine 144 may provide current output 874 as annotations of other objects and/or elements depicted in the AR/VR display(s), para. [0190], identified profile associated with user query/interaction, context/interactions used in determining output device).
Per claim 16, Trache in view of Rofouei discloses the computing device of claim 12,
Rofouei discloses: wherein the at least one processor is further configured to: determine an output device used by the user based on the user context information (para. [0045]; para. [0190]); and
select one of a plurality of LXMs based on the determined output device (para. [0171]); and
submit the enhanced prompt to the selected LXM (para. [0171]).
Per claim 21, Trache in view of Rofouei discloses the computing device of claim 12,
Trache discloses wherein the at least one processor is further configured to update the selected user profile based on how the user responds to an output received from the LXM in response to the enhanced prompt (para. [0071]).
Per claim 23, Trache discloses a computing device, comprising:
means for recognizing a user of the computing device (generate a detailed prompt for the language model based on a “profile” associated with a particular user …, para. [0005]; para. [0048]; para. [0097]-[0098]);
means for obtaining user context information (para. [0031]; para. [0038]; para. [0097]-[0098]);
means for receiving a user prompt for a large generative artificial intelligence model (LXM) (the system may receive a simple user input …, para. [0005]; para. [0024]-[0025]; para. [0097]-[0098]);
means for selecting in response to receiving the user prompt, a user profile from among a plurality of user profiles based on a combination of the user, the user context information, and the user prompt (Abstract; fig. 3; the system may receive a simple user input and automatically generate a detailed prompt for the language model based on a “profile” associated with a particular user, role, cohort, use case, organization, and/or context…., para. [0005]; para. [0031]; At block 330, the AIS 102 receives a first natural language input…., para. [0064]; the AIS 102 may determine a profile and template associated with the profile, or update the determined profile and template, after receiving the first natural language input (or at any other time before block 340). For example, in some implementations, block 330 may be performed before blocks 310 and 320, para. [0065]; user profile selected in response to/as a result of natural language input/user prompt where selected profile is selected as a result of user and/or context information, i.e., selected profile is selected as a result of user prompt, user and context information);
means for generating an enhanced prompt based on the user prompt, the user context information, and information included in the selected user profile (para. [0041]; After receiving the user input 160, the context 170, and the selected profile 180, the prompt generation module 108 may generate an LLM prompt 190 based at least partly on the user input 160, the context 170, and the selected profile 180…., para. [0049]; para. [0097]-[0098]); and
means for submitting the enhanced prompt to the LXM (The prompt generation module 108 may then provide the generated LLM prompt 190 to the LLM 130a, para. [0049]; para. [0097]-[0098]).
Trache does not explicitly disclose: obtaining user context information from a source of physical context information in the computing device.
However, this feature is taught by Rofouei (the user's state may be ascertained, e.g., by retrieving contextual information associated with the user or the client device. Some examples of contextual signals that may be generated by a client device include, for instance, time of day, location, current activity (e.g., determined from an accelerometer and/or gyroscope) …, para. [0004]; para. [0164]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Rofouei with the device of Trache in arriving at the missing features of Trache, because such combination would have resulted in providing content that is not only directly responsive to the latest query issued by the user, but is tailored to the user's ongoing state (Rofouei, para. [0002]; para. [0004]).
Per claim 24, Trache in view of Rofouei discloses the computing device of claim 23,
Rofouei discloses means for determining an activity of the user based on the user context information, wherein means for selecting the user profile from among the plurality of user profiles based on the user, the user context information, and the user prompt comprises means for selecting the user profile based at least in part on the determined activity of the user (Some examples of contextual signals that may be generated by a client device include, for instance, time of day, location, current activity (e.g., determined from an accelerometer and/or gyroscope) …, para. [0004]-[0005]; para. [0045]; para. [0047]; para. [0064]; para. [0202]; the system, in determining whether the submitter of the query is already familiar with the certain content, compares profile data, of the profile, to the query and/or to search result document(s) that are responsive to the query and/or to one or more related queries. For example, the system can determine that the user is already familiar with the certain content if the comparison indicates the user has already interacted with search result document(s) having the certain content and/or has profile data that directly indicates familiarity with the certain content, para. [0136]).
Per claim 25, Trache in view of Rofouei discloses the computing device of claim 23,
Rofouei discloses means for determining a location of the user based on the user context information, wherein means for selecting the user profile from among the plurality of user profiles based on the user, the user context information, and the user prompt comprises means for selecting the user profile based at least in part on the determined location of the user (para. [0004]; para. [0045]; para. [0047]; para. [0064]; the system, in determining whether the submitter of the query is already familiar with the certain content, compares profile data, of the profile, to the query and/or to search result document(s) that are responsive to the query and/or to one or more related queries. For example, the system can determine that the user is already familiar with the certain content if the comparison indicates the user has already interacted with search result document(s) having the certain content and/or has profile data that directly indicates familiarity with the certain content, para. [0136]; User state may also include a variety of contextual signals that may be determined from the user's device and/or from resources controlled by the user, such as the user's past, present, or future location (e.g., position coordinates output by a GPS sensor) …, para. [0164]).
Per claim 26, Trache in view of Rofouei discloses the computing device of claim 23,
Rofouei discloses means for determining an output device used by the user based on the user context information, wherein means for selecting the user profile from among the plurality of user profiles based on the user, the user context information, and the user prompt comprises means for selecting the user profile based at least in part on the determined output device (the context engine 113 can determine a context and/or update the user's state utilizing current or recent interaction(s) via the client device 110 …, para. [0045]; para. [0047]; para. [0064]; the system determines, based on a profile associated with the query (e.g., a device profile of the client device via which the query was submitted and/or a user profile of the submitter), whether the submitter of the query is already familiar with certain content that is responsive to the query …, para. [0136]; If the user interacts with the generative companion using spoken utterances, in some implementations, chat engine 144 may respond in kind using computer-generated speech output as current output 874. In some implementations in which the user wears an augmented reality (AR) or virtual reality (VR) client device (e.g., AR headset, VR headset, smart glasses, etc.), chat engine 144 may provide current output 874 as annotations of other objects and/or elements depicted in the AR/VR display(s), para. [0190], identified profile associated with user query/interaction, context/interactions used in determining output device).
Per claim 27, Trache in view of Rofouei discloses the computing device of claim 23, further comprising:
Rofouei discloses: means for determining an output device used by the user based on the user context information (para. [0045]; para. [0190]); and
means for selecting one of a plurality of LXMs based on the determined output device (para. [0171]),
wherein means for submitting the enhanced prompt to the LXM comprises means for submitting the enhanced prompt to the selected LXM (para. [0171]).
Per claim 30, Trache discloses a non-transitory processor-readable medium having stored thereon processor-executable instructions configured to cause at least one processor of a computing device to perform operations comprising:
recognizing a user of the computing device (generate a detailed prompt for the language model based on a “profile” associated with a particular user …, para. [0005]; para. [0048]);
obtaining user context information (para. [0031]; para. [0038]);
receiving a user prompt for a large generative artificial intelligence model (LXM) (the system may receive a simple user input …, para. [0005]; para. [0024]-[0025]);
in response to receiving the user prompt, selecting a user profile from among a plurality of user profiles based on a combination of the user, the user context information, and the user prompt (Abstract; fig. 3; the system may receive a simple user input and automatically generate a detailed prompt for the language model based on a “profile” associated with a particular user, role, cohort, use case, organization, and/or context…., para. [0005]; para. [0031]; At block 330, the AIS 102 receives a first natural language input…., para. [0064]; the AIS 102 may determine a profile and template associated with the profile, or update the determined profile and template, after receiving the first natural language input (or at any other time before block 340). For example, in some implementations, block 330 may be performed before blocks 310 and 320, para. [0065]; para. [0066], user profile selected in response to/as a result of natural language input/user prompt where selected profile is selected as a result of user and/or context information, i.e., selected profile is selected as a result of user prompt, user and context information);
generating an enhanced prompt based on the user prompt, the user context information, and information included in the selected user profile (para. [0041]; After receiving the user input 160, the context 170, and the selected profile 180, the prompt generation module 108 may generate an LLM prompt 190 based at least partly on the user input 160, the context 170, and the selected profile 180…., para. [0049]); and
submitting the enhanced prompt to the LXM (The prompt generation module 108 may then provide the generated LLM prompt 190 to the LLM 130a, para. [0049]).
Trache does not explicitly disclose obtaining user context information from a source of physical context information in the computing device.
However, this feature is taught by Rofouei (the user's state may be ascertained, e.g., by retrieving contextual information associated with the user or the client device. Some examples of contextual signals that may be generated by a client device include, for instance, time of day, location, current activity (e.g., determined from an accelerometer and/or gyroscope) …, para. [0004]; para. [0164]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Rofouei with the medium of Trache to arrive at the missing features of Trache, because such a combination would have resulted in providing content that is not only directly responsive to the latest query issued by the user, but is also tailored to the user's ongoing state (Rofouei, para. [0002]; para. [0004]).
2. Claims 6-9, 11, 17-20, 22, 28 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Trache in view of Rofouei as applied to claims 1, 12 and 23 above, and further in view of Maurer et al. US 2024/0176960 A1 (“Maurer”).
Per claim 6, Trache in view of Rofouei discloses the method of claim 1,
Trache in view of Rofouei does not explicitly disclose determining from the user context information whether the user is communicating with another person, wherein selecting the user profile from among the plurality of user profiles based on the user, the user context information, and the user prompt comprises selecting a user profile that is appropriate for communicating with another person regarding subject matter in the user prompt
However, this feature is taught by Maurer (the messaging component 116 can receive, from a first user account, a message transmitted in association with a virtual space. In response to receiving the message (e.g., interaction data associated with an interaction of a first user with the virtual space), the messaging component 116 can identify a second user associated with the virtual space (e.g., another user that is a member of the virtual space).…, para. [0030]; the audio and/or video component 118 can store user identifiers associated with user accounts of members of a particular audio and/or video conversation, such as to identify user(s) with appropriate permissions to access the particular audio and/or video conversation, para. [0032]-[0033]; the user/org data 129 can store data in user profiles (which can also be referred to as “user accounts”), which can store data associated with a user, including, but not limited to, one or more user identifiers associated with multiple, different organizations or entities …, para. [0042]; para. [0051]; para. [0089], message as including subject matter, identifying users associated with user profiles/accounts as selecting user profiles/accounts appropriate for communicating message)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Maurer with the method of Trache in view of Rofouei to arrive at the missing features of Trache in view of Rofouei, because such a combination would have resulted in more robustly transcribing and/or summarizing synchronous or asynchronous multimedia collaboration sessions in a group-based communication platform (Maurer, para. [0017]).
Per claim 7, Trache in view of Rofouei discloses the method of claim 1,
Trache in view of Rofouei does not explicitly disclose determining from the user context information whether the user is communicating with another person, determining from the user context information a relationship or identity of the other person in response to determining that the user is communicating with the other person; or selecting another person profile from a plurality of other person profiles based on the determined relationship or identity of the other person, wherein generating the enhanced prompt based on the user prompt, the user context information, and information included in the selected user profile comprises generating the enhanced prompt based on the user prompt, the user context information, information included in the selected user profile, and information included in the selected another person profile
However, these features are taught by Maurer:
determining from the user context information whether the user is communicating with another person (the messaging component 116 can receive, from a first user account, a message transmitted in association with a virtual space. In response to receiving the message (e.g., interaction data associated with an interaction of a first user with the virtual space), the messaging component 116 can identify a second user associated with the virtual space (e.g., another user that is a member of the virtual space).…, para. [0030]; the audio and/or video component 118 can store user identifiers associated with user accounts of members of a particular audio and/or video conversation, such as to identify user(s) with appropriate permissions to access the particular audio and/or video conversation, para. [0032]-[0033]);
determining from the user context information a relationship or identity of the other person in response to determining that the user is communicating with the other person (para. [0032]-[0033]); and
selecting another person profile from a plurality of other person profiles based on the determined relationship or identity of the other person, wherein generating the enhanced prompt based on the user prompt, the user context information, and information included in the selected user profile comprises generating the enhanced prompt based on the user prompt, the user context information, information included in the selected user profile, and information included in the selected another person profile (para. [0032]-[0033]; the text may identify by username or other ID, one or more users and the respective actions they are to take …, para. [0141]; The group-based communication system may include communication data such as messages, queries, files, mentions, users or user profiles, interactions, tickets, channels, applications integrated into one or more channels, conversations …, para. [0065]; a ML model(s) may be configured to receive, as input, the raw audio-visual data, user reaction data (e.g., an emoji selected by a user), a detected gesture in the video (e.g., a user shaking their head “no”, nodding “yes”, waving goodbye, and the like), messages or text input by the user (e.g., a user making an edit to the AI notes 610), a thread of messages input by a plurality of users …, para. [0164]);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Maurer with the method of Trache in view of Rofouei to arrive at the missing features of Trache in view of Rofouei, because such a combination would have resulted in more robustly transcribing and/or summarizing synchronous or asynchronous multimedia collaboration sessions in a group-based communication platform (Maurer, para. [0017]).
Per claim 8, Trache in view of Rofouei discloses the method of claim 1,
Trache in view of Rofouei does not explicitly disclose determining from either or both of the user prompt and the user context information an urgency level or selecting the user profile based on the urgency level
However, these features are taught by Maurer:
determining from either or both of the user prompt and the user context information an urgency level (para. [0170]); and
selecting the user profile based on the urgency level (a user may have an urgent decision and want immediate verbal feedback from other members of the channel. As another example, a synchronous multimedia collaboration session may be initiated with one or more other users of the group-based communication system through direct messaging. In some examples, the audience of a synchronous multimedia collaboration session may be determined based on the context in which the synchronous multimedia collaboration session was initiated.…, para. [0091]; para. [0170])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Maurer with the method of Trache in view of Rofouei to arrive at the missing features of Trache in view of Rofouei, because such a combination would have resulted in more robustly transcribing and/or summarizing synchronous or asynchronous multimedia collaboration sessions in a group-based communication platform (Maurer, para. [0017]).
Per claim 9, Trache in view of Rofouei and Maurer discloses the method of claim 8,
Rofouei discloses: selecting one of a plurality of LXMs based at least in part on one or more of the following: an urgency level; the selected user profile; the user context information (para. [0214]); and one or more input or output devices,
wherein submitting the enhanced prompt to the LXM comprises submitting the enhanced prompt to the selected LXM (para. [0214]).
Per claim 11, Trache in view of Rofouei discloses the method of claim 1,
Trache in view of Rofouei does not explicitly disclose obtaining user profiles associated with users of devices in communication with the computing device, generating the enhanced prompt based on the selected user profile, the obtained user profiles associated with the users of the devices in communication with the computing device, the user context information, and the received user prompt, generating one or more LXM parameters to be sent with the enhanced prompt or submitting the enhanced prompt and the one or more LXM parameters to the LXM
However, these features are taught by Maurer:
obtaining user profiles associated with users of devices in communication with the computing device (para. [0030]; para. [0039]);
generating the enhanced prompt based on the selected user profile, the obtained user profiles associated with the users of the devices in communication with the computing device, the user context information, and the received user prompt (The group-based communication system may include communication data such as messages, queries, files, mentions, users or user profiles, interactions, tickets, channels, applications integrated into one or more channels, conversations …, para. [0065]; para. [0164]);
generating one or more LXM parameters to be sent with the enhanced prompt (a ML model(s) may be configured to receive, as input, the raw audio-visual data, user reaction data (e.g., an emoji selected by a user), a detected gesture in the video (e.g., a user shaking their head “no”, nodding “yes”, waving goodbye, and the like), messages or text input by the user (e.g., a user making an edit to the AI notes 610), a thread of messages input by a plurality of users …, para. [0065]; para. [0164]); and
submitting the enhanced prompt and the one or more LXM parameters to the LXM (Abstract; para. [0065]; The output from a first ML model configured to filter audio-visual data and/or user interaction data may be input into another ML model (e.g., summarization model such as a large language model (LLM)) …, para. [0164]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Maurer with the method of Trache in view of Rofouei to arrive at the missing features of Trache in view of Rofouei, because such a combination would have resulted in more robustly transcribing and/or summarizing synchronous or asynchronous multimedia collaboration sessions in a group-based communication platform (Maurer, para. [0017]).
Per claim 17, Trache in view of Rofouei discloses the computing device of claim 12,
Trache in view of Rofouei does not explicitly disclose wherein the at least one processor is further configured to: determine from the user context information whether the user is communicating with another person or select the user profile from among the plurality of user profiles based on the user, the user context information, the user prompt, and a user profile that is appropriate for communicating with another person regarding subject matter in the user prompt
However, these features are taught by Maurer:
wherein the at least one processor is further configured to: determine from the user context information whether the user is communicating with another person (the messaging component 116 can receive, from a first user account, a message transmitted in association with a virtual space. In response to receiving the message (e.g., interaction data associated with an interaction of a first user with the virtual space), the messaging component 116 can identify a second user associated with the virtual space (e.g., another user that is a member of the virtual space).…, para. [0030]; the audio and/or video component 118 can store user identifiers associated with user accounts of members of a particular audio and/or video conversation, such as to identify user(s) with appropriate permissions to access the particular audio and/or video conversation, para. [0032]-[0033]); and
select the user profile from among the plurality of user profiles based on the user, the user context information, the user prompt, and a user profile that is appropriate for communicating with another person regarding subject matter in the user prompt (the messaging component 116 can receive, from a first user account, a message transmitted in association with a virtual space. In response to receiving the message (e.g., interaction data associated with an interaction of a first user with the virtual space), the messaging component 116 can identify a second user associated with the virtual space (e.g., another user that is a member of the virtual space).…, para. [0030]; the audio and/or video component 118 can store user identifiers associated with user accounts of members of a particular audio and/or video conversation, such as to identify user(s) with appropriate permissions to access the particular audio and/or video conversation, para. [0032]-[0033]; the user/org data 129 can store data in user profiles (which can also be referred to as “user accounts”), which can store data associated with a user, including, but not limited to, one or more user identifiers associated with multiple, different organizations or entities …, para. [0042]; para. [0051]; para. [0089], message as including subject matter, identifying users associated with user profiles/accounts as selecting user profiles/accounts appropriate for communicating message)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Maurer with the device of Trache in view of Rofouei to arrive at the missing features of Trache in view of Rofouei, because such a combination would have resulted in more robustly transcribing and/or summarizing synchronous or asynchronous multimedia collaboration sessions in a group-based communication platform (Maurer, para. [0017]).
Per claim 18, Trache in view of Rofouei discloses the computing device of claim 12,
Trache in view of Rofouei does not explicitly disclose wherein the at least one processor is further configured to: determine from the user context information whether the user is communicating with another person, determine from the user context information a relationship or identity of the other person in response to determining that the user is communicating with the other person or select another person profile from a plurality of other person profiles based on the determined relationship or identity of the other person, generate the enhanced prompt based on the user prompt, the user context information, information included in the selected user profile, and information included in the selected another person profile
However, these features are taught by Maurer:
wherein the at least one processor is further configured to: determine from the user context information whether the user is communicating with another person (the messaging component 116 can receive, from a first user account, a message transmitted in association with a virtual space. In response to receiving the message (e.g., interaction data associated with an interaction of a first user with the virtual space), the messaging component 116 can identify a second user associated with the virtual space (e.g., another user that is a member of the virtual space).…, para. [0030]; the audio and/or video component 118 can store user identifiers associated with user accounts of members of a particular audio and/or video conversation, such as to identify user(s) with appropriate permissions to access the particular audio and/or video conversation, para. [0032]-[0033]);
determine from the user context information a relationship or identity of the other person in response to determining that the user is communicating with the other person (para. [0032]-[0033]); and
select another person profile from a plurality of other person profiles based on the determined relationship or identity of the other person, generate the enhanced prompt based on the user prompt, the user context information, information included in the selected user profile, and information included in the selected another person profile (para. [0032]-[0033]; the text may identify by username or other ID, one or more users and the respective actions they are to take …, para. [0141]; The group-based communication system may include communication data such as messages, queries, files, mentions, users or user profiles, interactions, tickets, channels, applications integrated into one or more channels, conversations …, para. [0065]; a ML model(s) may be configured to receive, as input, the raw audio-visual data, user reaction data (e.g., an emoji selected by a user), a detected gesture in the video (e.g., a user shaking their head “no”, nodding “yes”, waving goodbye, and the like), messages or text input by the user (e.g., a user making an edit to the AI notes 610), a thread of messages input by a plurality of users …, para. [0164]);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Maurer with the device of Trache in view of Rofouei to arrive at the missing features of Trache in view of Rofouei, because such a combination would have resulted in more robustly transcribing and/or summarizing synchronous or asynchronous multimedia collaboration sessions in a group-based communication platform (Maurer, para. [0017]).
Per claim 19, Trache in view of Rofouei discloses the computing device of claim 12,
Trache in view of Rofouei does not explicitly disclose wherein the at least one processor is further configured to: determine from either or both of the user prompt and the user context information an urgency level or select the user profile based on the urgency level
However, these features are taught by Maurer:
wherein the at least one processor is further configured to: determine from either or both of the user prompt and the user context information an urgency level (para. [0170]); and
select the user profile based on the urgency level (a user may have an urgent decision and want immediate verbal feedback from other members of the channel. As another example, a synchronous multimedia collaboration session may be initiated with one or more other users of the group-based communication system through direct messaging. In some examples, the audience of a synchronous multimedia collaboration session may be determined based on the context in which the synchronous multimedia collaboration session was initiated.…, para. [0091]; para. [0170])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Maurer with the device of Trache in view of Rofouei to arrive at the missing features of Trache in view of Rofouei, because such a combination would have resulted in more robustly transcribing and/or summarizing synchronous or asynchronous multimedia collaboration sessions in a group-based communication platform (Maurer, para. [0017]).
Per claim 20, Trache in view of Rofouei and Maurer discloses the computing device of claim 19,
Rofouei discloses: wherein the at least one processor is further configured to: select one of a plurality of LXMs based at least in part on one or more of the following: an urgency level; the selected user profile; the user context information (para. [0214]); or one or more input or output devices; and
submit the enhanced prompt to the selected LXM (para. [0214]).
Per claim 22, Trache in view of Rofouei discloses the computing device of claim 12,
Trache in view of Rofouei does not explicitly disclose wherein the at least one processor is further configured to: obtain user profiles associated with users of devices in communication with the computing device, generate the enhanced prompt based on the selected user profile, the obtained user profiles associated with the users of the devices in communication with the computing device, the user context information, and the received user prompt, generate one or more LXM parameters to be sent with the enhanced prompt or submit the enhanced prompt and the one or more LXM parameters to the LXM
However, these features are taught by Maurer:
wherein the at least one processor is further configured to: obtain user profiles associated with users of devices in communication with the computing device (para. [0030]; para. [0039]);
generate the enhanced prompt based on the selected user profile, the obtained user profiles associated with the users of the devices in communication with the computing device, the user context information, and the received user prompt (The group-based communication system may include communication data such as messages, queries, files, mentions, users or user profiles, interactions, tickets, channels, applications integrated into one or more channels, conversations …, para. [0065]; para. [0164]);
generate one or more LXM parameters to be sent with the enhanced prompt (a ML model(s) may be configured to receive, as input, the raw audio-visual data, user reaction data (e.g., an emoji selected by a user), a detected gesture in the video (e.g., a user shaking their head “no”, nodding “yes”, waving goodbye, and the like), messages or text input by the user (e.g., a user making an edit to the AI notes 610), a thread of messages input by a plurality of users …, para. [0065]; para. [0164]); and
submit the enhanced prompt and the one or more LXM parameters to the LXM (Abstract; para. [0065]; The output from a first ML model configured to filter audio-visual data and/or user interaction data may be input into another ML model (e.g., summarization model such as a large language model (LLM)) …, para. [0164]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Maurer with the device of Trache in view of Rofouei to arrive at the missing features of Trache in view of Rofouei, because such a combination would have resulted in more robustly transcribing and/or summarizing synchronous or asynchronous multimedia collaboration sessions in a group-based communication platform (Maurer, para. [0017]).
Per claim 28, Trache in view of Rofouei discloses the computing device of claim 23,
Trache in view of Rofouei does not explicitly disclose means for determining from the user context information whether the user is communicating with another person, wherein means for selecting the user profile from among the plurality of user profiles based on the user, the user context information, the user prompt comprises means for selecting a user profile that is appropriate for communicating with another person regarding subject matter in the user prompt
However, this feature is taught by Maurer (the messaging component 116 can receive, from a first user account, a message transmitted in association with a virtual space. In response to receiving the message (e.g., interaction data associated with an interaction of a first user with the virtual space), the messaging component 116 can identify a second user associated with the virtual space (e.g., another user that is a member of the virtual space).…, para. [0030]; the audio and/or video component 118 can store user identifiers associated with user accounts of members of a particular audio and/or video conversation, such as to identify user(s) with appropriate permissions to access the particular audio and/or video conversation, para. [0032]-[0033]; the user/org data 129 can store data in user profiles (which can also be referred to as “user accounts”), which can store data associated with a user, including, but not limited to, one or more user identifiers associated with multiple, different organizations or entities …, para. [0042]; para. [0051]; para. [0089], message as including subject matter, identifying users associated with user profiles/accounts as selecting user profiles/accounts appropriate for communicating message)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Maurer with the device of Trache in view of Rofouei to arrive at the missing features of Trache in view of Rofouei, because such a combination would have resulted in more robustly transcribing and/or summarizing synchronous or asynchronous multimedia collaboration sessions in a group-based communication platform (Maurer, para. [0017]).
Per claim 29, Trache in view of Rofouei discloses the computing device of claim 23,
Trache in view of Rofouei does not explicitly disclose means for determining from the user context information whether the user is communicating with another person, means for determining from the user context information a relationship or identity of the other person in response to determining that the user is communicating with the other person, means for selecting another person profile from a plurality of other person profiles based on the determined relationship or identity of the other person, or wherein means for generating the enhanced prompt based on the user prompt, the user context information, and information included in the selected user profile comprises means for generating the enhanced prompt based on the user prompt, the user context information, information included in the selected user profile, and information included in the selected another person profile.
However, these features are taught by Maurer:
means for determining from the user context information whether the user is communicating with another person (the messaging component 116 can receive, from a first user account, a message transmitted in association with a virtual space. In response to receiving the message (e.g., interaction data associated with an interaction of a first user with the virtual space), the messaging component 116 can identify a second user associated with the virtual space (e.g., another user that is a member of the virtual space).…, para. [0030]; the audio and/or video component 118 can store user identifiers associated with user accounts of members of a particular audio and/or video conversation, such as to identify user(s) with appropriate permissions to access the particular audio and/or video conversation, para. [0032]-[0033]);
means for determining from the user context information a relationship or identity of the other person in response to determining that the user is communicating with the other person (para. [0032]-[0033]); and
means for selecting another person profile from a plurality of other person profiles based on the determined relationship or identity of the other person, wherein means for generating the enhanced prompt based on the user prompt, the user context information, and information included in the selected user profile comprises means for generating the enhanced prompt based on the user prompt, the user context information, information included in the selected user profile, and information included in the selected another person profile (para. [0032]-[0033]; the text may identify by username or other ID, one or more users and the respective actions they are to take …, para. [0141]; The group-based communication system may include communication data such as messages, queries, files, mentions, users or user profiles, interactions, tickets, channels, applications integrated into one or more channels, conversations …, para. [0065]; a ML model(s) may be configured to receive, as input, the raw audio-visual data, user reaction data (e.g., an emoji selected by a user), a detected gesture in the video (e.g., a user shaking their head “no”, nodding “yes”, waving goodbye, and the like), messages or text input by the user (e.g., a user making an edit to the AI notes 610), a thread of messages input by a plurality of users …, para. [0164])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Maurer with the device of Trache in view of Rofouei, to arrive at the missing features of Trache in view of Rofouei, because such a combination would have resulted in more robustly transcribing and/or summarizing synchronous or asynchronous multimedia collaboration sessions in a group-based communication platform (Maurer, para. [0017]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the attached PTO-892 form.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUJIMI A ADESANYA whose telephone number is (571) 270-3307. The examiner can normally be reached Monday-Friday, 8:30am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil can be reached at 571-272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OLUJIMI A ADESANYA/Primary Examiner, Art Unit 2658