DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This communication is a non-final first Office action on the merits. Claims 1-19 are currently pending and have been considered below.
Priority
The present application, filed on 01/14/2025, claims priority to Provisional Application 63/620,952, filed on 01/15/2024.
Drawings
The drawings are objected to under 37 CFR 1.84(b)(1) because Figures 3-14 are photographs although the concepts could practicably be depicted in line drawings, and because the text in those figures is not legible. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claims 3, 10 and 17 are objected to because of the following informality: Claims 3, 10 and 17 recite “to prompt the collected of the further related project initiation data,” which should read -- to prompt collection of further related project initiation data --. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 1, 8 and 15 recite the acronym “ML”; however, Applicant has failed to explicitly define what the acronym refers to. Further clarification is needed. Dependent claims inherit the deficiencies of their parent claims and are therefore rejected on the same basis as indicated above for the respective parent claims. Further, claims 2, 9 and 16 each recite the limitation “the persona database.” Claims 2, 9 and 16 depend from claims 1, 8 and 15, respectively, and none of claims 1, 8 and 15 recites a persona database. There is insufficient antecedent basis for this limitation in each of these claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-19 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception without integration into a practical application and without significantly more.
Step 1: Identifying Statutory Categories
When considering subject matter eligibility under 35 U.S.C. § 101, it must be determined whether the claims are directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). In the instant case, claims 1-7 are directed to a method (i.e., a process), claims 8-14 are directed to a system (i.e., a machine), and claims 15-19 are directed to a computer program product (i.e., an article of manufacture). Thus, each of these claims falls within one of the four statutory categories. Nevertheless, the claims fall within the judicial exception of an abstract idea.
Step 2A: Prong One: Abstract Ideas
Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention recites an abstract idea. Independent claim 1, analogous to independent claims 8 and 15, recites: a method for generating project initiation using a plurality of personas and a corresponding plurality of questions, comprising: transmitting project initiation data usable to render the project initiation, defined to include at least one representative, each representative corresponding to one of the personas in the plurality of personas and each representative associated with at least one associated question from the plurality of questions; receiving collected project initiation data from a user, wherein the collected project initiation data is received subsequent to presenting a question prompt to the user, wherein the question prompt includes a particular representative and a particular associated question associated with that particular representative; transmitting an assessment prompt, wherein the assessment prompt includes at least one selected question from the plurality of questions and the collected project initiation data received from the user; receiving an assessment response in response to the assessment prompt, wherein the assessment response indicates whether the collected project initiation data is responsive to the at least one selected question; and updating with at least one unanswered question from the plurality of questions based on the assessment response.
The limitations, as drafted, recite a process that, under its broadest reasonable interpretation, falls under the abstract groupings of certain methods of organizing human activity: commercial or legal interactions (including advertising, marketing or sales activities or behaviors, and business relations) and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). The claims describe a system for project initiation that uses a plurality of personas and corresponding questions and collects responses, which is one of certain methods of organizing human activity.
The limitations also fall under mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion). Claim 1 recites, for example, “generating project initiation using a plurality of personas and a corresponding plurality of questions”; “transmitting, project initiation data to include at least one representative, each representative corresponding to one of the personas in the plurality of personas and each representative associated with at least one associated question from the plurality of questions”; “receiving collected project initiation data from a user, wherein the collected project initiation data is received subsequent to presenting a question prompt to the user, wherein the question prompt includes a particular representative and a particular associated question associated with that particular representative”; “transmitting an assessment prompt, wherein the assessment prompt includes at least one selected question from the plurality of questions and the collected project initiation data received from the user”; “receiving, an assessment response in response to the assessment prompt, wherein the assessment response indicates whether the collected project initiation data is responsive to the at least one selected question”; and “updating with at least one unanswered question from the plurality of questions based on the assessment response.” These are concepts performed in the human mind as mental processes because the steps of generating, receiving, collecting, updating, transmitting and analyzing data mimic human thought processes of observation, evaluation, judgment and opinion, perhaps with paper and pencil, where data interpretation is perceptible in the human mind. See In re TLI Commc’ns LLC Patent Litig., 823 F.3d 607, 611 (Fed. Cir. 2016); FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 1093-94 (Fed. Cir. 2016).
Further, the dependent claims add additional limitations, for example: (claims 2, 9 and 16) selecting the plurality of questions from the persona based on a selected set of personas and a priority metric associated with each question in the plurality of questions; (claims 3, 10 and 17) identifying a sub-grouping of related questions based on the assessment response, the sub-grouping of related questions being related to a particular selected question of the at least one selected question, wherein the assessment response indicates that further related project initiation data is required relating to the particular selected question and the sub-grouping of related questions is identified to prompt the collected of the further related project initiation data; and updating with the sub-grouping of questions; (claims 4 and 11) wherein the document content comprises text; (claims 5 and 12) wherein the document content comprises image data; (claims 6, 13 and 18) identifying at least two candidate systems; and selecting from the at least two candidate systems; (claims 7, 14 and 19) receiving a persona selection request comprising the selected plurality of personas. These only serve to further limit the abstract idea. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as certain methods of organizing human activity and mental processes, but for the recitation of generic computer components, the claims recite an abstract idea.
Step 2A: Prong Two
This judicial exception is not integrated into a practical application because the claims merely describe how to generally “apply” the abstract idea. In particular, the claims only recite the additional elements of: (claim 1) computer, ML-assisted, user interface, user device, virtual representative, Large Language Model (LLM) system, processor; (claims 2, 9 and 16) database; (claims 5 and 12) multi-modal model; (claim 8) memory; (claim 15) computer program product comprising a non-transitory computer readable medium. These additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Simply implementing the abstract idea on generic computer components is not a practical application of the abstract idea, as it adds the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). The limitations generally link the abstract idea to a particular technological environment or field of use (such as computing or machine learning, see MPEP 2106.05(h)).
The “generating project initiation using a plurality of personas and a corresponding plurality of questions”; “transmitting, project initiation data to include at least one representative, each representative corresponding to one of the personas in the plurality of personas and each representative associated with at least one associated question from the plurality of questions”; “receiving collected project initiation data from a user, wherein the collected project initiation data is received subsequent to presenting a question prompt to the user, wherein the question prompt includes a particular representative and a particular associated question associated with that particular representative”; “transmitting an assessment prompt, wherein the assessment prompt includes at least one selected question from the plurality of questions and the collected project initiation data received from the user”; “receiving, an assessment response in response to the assessment prompt, wherein the assessment response indicates whether the collected project initiation data is responsive to the at least one selected question”; and “updating with at least one unanswered question from the plurality of questions based on the assessment response” limitations describe data gathering. The Office has long considered data gathering to be insignificant extra-solution activity. Merely adding insignificant extra-solution activity to an abstract idea does not integrate the exception into a practical application (see MPEP 2106.05(g)). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide generic computer implementation and do not impose a meaningful limit sufficient to integrate the abstract idea into a practical application.
Step 2B:
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception and generally link the abstract idea to a particular technological environment or field of use. With respect to the computer components, these limitations are described in Applicant’s own specification as generic and conventional elements. See at least Applicant’s specification, Figure 1 and para 0115, which recites: “The processor unit 208 controls the operation of the server 200. The processor unit 208 can be any suitable processor(s), controller(s) or digital signal processor(s) that can provide sufficient processing power depending on the configuration, purposes and requirements of the server 200 as is known by those skilled in the art. For example, the processor unit 208 may be a high-performance general processor. Alternatively or in addition, the processor unit 208 can include more than one processor with each processor being configured to perform different dedicated tasks. Alternatively or in addition, the processor unit 208 may include a standard processor, such as an Intel® processor or an AMD® processor.”; para 0117, which recites: “The memory unit 210 can include RAM, ROM, one or more hard drives, one or more flash drives or some other suitable data storage elements”; and para 0119, which recites: “The I/O unit 212 can include at least one of a mouse, a keyboard, a touch screen, a thumbwheel, a trackpad, a trackball, a card-reader, an audio source, a microphone, voice recognition software and the like again depending on the particular implementation of the server 200. Optionally, some of these components can be integrated with one another. Optionally, the I/O unit 212 may be omitted”.
The specification thus spells out different generic equipment that might be used to apply the concept and the particular steps such conventional processing would entail. The claims at issue therefore amount to nothing significantly more than instructions to apply the abstract idea using some unspecified, generic computers. The use of such generic computers to receive or transmit data over a network has been identified by the courts as well-understood, routine, and conventional activity.
With respect to machine learning, the specification recites a list of known machine learning models. See, for example, Applicant’s specification, para 0099-0111, which lists: “GPT-2 made by OpenAI®; GPT-3 made by OpenAI®; GPT-Neo made by EleutherAI®; GPT-J made by EleutherAI®; Ernie 3.0 Titan made by Baidu®; Claude made by Anthropic®; LaMDA (Language Models for Dialog Applications) made by Google®; GPT-NeoX made by EleutherAI®; PaLM (Pathways Language Model) made by Google®; LLaMA (Large Language Model Meta AI) made by Meta®; GPT-4 made by OpenAI®; PaLM 2 (Pathways Language Model 2) made by Google®; Llama 2 made by Meta®”. With respect to the “generating project initiation using a plurality of personas and a corresponding plurality of questions”; “transmitting, project initiation data to include at least one representative, each representative corresponding to one of the personas in the plurality of personas and each representative associated with at least one associated question from the plurality of questions”; “receiving collected project initiation data from a user, wherein the collected project initiation data is received subsequent to presenting a question prompt to the user, wherein the question prompt includes a particular representative and a particular associated question associated with that particular representative”; “transmitting an assessment prompt, wherein the assessment prompt includes at least one selected question from the plurality of questions and the collected project initiation data received from the user”; “receiving, an assessment response in response to the assessment prompt, wherein the assessment response indicates whether the collected project initiation data is responsive to the at least one selected question”; and “updating with at least one unanswered question from the plurality of questions based on the assessment response” limitations, these amount to mere data gathering or merely add insignificant extra-solution activity to the abstract idea, see MPEP 2106.05(d). The legal precedent in the Symantec, TLI and OIP Techs court decisions cited in MPEP 2106.05(d)(II) indicates that receipt and transmission of information over a computer network are well-understood, routine, and conventional functions when claimed in a generic manner, as is the case here. See also Trading Techs. Int’l, Inc. v. IBG LLC, 921 F.3d 1084, 1093 (Fed. Cir. 2019) (data gathering and displaying are well-understood, routine, and conventional activities). Furthermore, claims 1-19 have been fully analyzed to determine whether there are additional elements recited that amount to significantly more than the abstract idea. The limitations fail to include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Thus, nothing in the claims adds significantly more to the abstract idea. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. The claims are ineligible. Therefore, since there are no limitations in the claims that transform the exception into a patent-eligible application such that the claims amount to significantly more than the exception itself, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Selka et al. (US 2022/0343233 A1), hereinafter “Selka”, in view of AU 2019/204285 A1, hereinafter “AU”, and further in view of Cook (US 2024/0126794 A1), hereinafter “Cook”.
Regarding Claim 1, Selka teaches A computer-implemented method for generating an ML-assisted project initiation user interface using a plurality of personas and a corresponding plurality of questions, comprising: (Selka, para 0019, teaches GUI; para 0027, teaches machine learning, a field of artificial intelligence (AI) that keeps a computer's built-in algorithms current; Abstract, teaches a business method solution enables success through collaboration and data competency resulting in project-ready blueprints (Examiner notes project initiation), an input synthesizer, and an embedded conversational chat bot (Selka, para 0121, teaching different chatbots)... enabling collaboration within the business team...conversational chatbot engages interdisciplinary teams, results in the transfer of knowledge ... and provides questions), transmitting, to a user device, ML-assisted project initiation user interface data usable at the user device to render a ML-assisted project initiation user interface, the ML-assisted project initiation user interface defined to include at least one virtual representative, each virtual representative corresponding to one of the personas in the plurality of personas and each virtual representative associated with at least one associated question from the plurality of questions; (Selka, para 0019, teaches GUI; para 0027, teaches machine learning, a field of artificial intelligence (AI) that keeps a computer's built-in algorithms current; Abstract, teaches a business method solution through collaboration and data competency resulting in project-ready blueprints (project initiation), and an embedded conversational chat bot; Questions and answers with chatbot (virtual representative; see at least Selka, para 0121, teaching different chatbots) is taught throughout Selka, see at least para 0121-0122);
receiving collected project initiation data from a user at the user device through the ML- assisted project initiation user interface, wherein the collected project initiation data is received subsequent to presenting a question prompt to the user through the ML-assisted project initiation user interface, wherein the question prompt includes a particular virtual representative and a particular associated question associated with that particular virtual representative; (Selka, para 0019, teaches GUI; para 0027, teaches Machine learning; Selka, Abstract, teaches a conversational chat bot; Selka, para 0079, teaches project management workflows with guided questions; Selka, para 0084, A Projects Dashboard; Business Requirements are entered ... inputs are provided by users; Further, see Selka, para 0097, FIG. 7 illustrates the Business Requirements Page which serves like a whiteboard where teams can consolidate their responses and drag and drop them into individual business requirement sections);
... project initiation data received from the user; ... the collected project initiation data (Selka, Abstract, teaches a business method solution enables success through collaboration and data competency resulting in project-ready blueprints (Examiner notes project initiation));
updating, at the processor, the ML-assisted project initiation user interface with at least ... (Selka, para 0019, teaches GUI; para 0027, teaches Machine learning a field of artificial intelligence (AI); See at least Selka, para 0035, teaches processor; para 0117, teaches updates).
Yet, Selka does not appear to explicitly teach and in the same field of endeavor AU teaches transmitting an assessment prompt to a ... system, wherein the assessment prompt includes at least one selected question from the plurality of questions and the collected ... receiving, an assessment response from the ... system in response to the assessment prompt, wherein the assessment response indicates whether ... is responsive to the at least one selected question; ... one unanswered question from the plurality of questions based on the assessment response (AU, para 0026, Chatbot analytics may allow a chatbot ability to take a wealth of information from a variety of data sources and help monitor or spot potential flaws or problems. Chatbot analytics may also help improve human-machine interaction and overall user experience; AU, para 0084, Figure 8B illustrate a screen for a dashboard graphical user interface (GUI) an artificial intelligence (AI) based communications system.... The dashboard may provide real time insights of the effectiveness of the artificial intelligence (AI) based communications system. As shown, a number of metrics, analytics, analysis, and/or options may be provided by the dashboard... a total number of queries, each of them in detail if desired... all the liked or disliked responses, as well as any unanswered queries.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Selka with transmitting an assessment prompt to a ... system, wherein the assessment prompt includes at least one selected question from the plurality of questions and the collected ... receiving, an assessment response from the ... system in response to the assessment prompt, wherein the assessment response indicates whether ... is responsive to the at least one selected question; ... one unanswered question from the plurality of questions based on the assessment response as taught by AU with the motivation for creating and managing an artificial conversational entity using an artificial intelligence (Al) based communications system (AU, Abstract).
While Selka teaches Natural language processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data (See at least Selka, para 0033) and AU teaches a natural language processing (NLP) AI-based communications system to provide human-like conversations and understanding (See at least AU, para 0047), Selka and AU do not appear to explicitly teach and in the same field of endeavor Cook teaches Large Language Model (LLM) ... LLM (Cook, Abstract, See at least Cook, para 0055, teaches a digital assistant may include a large language model (LLM). A “large language model,” as used herein, is a deep learning algorithm that can recognize, summarize, translate, predict and/or generate text and other content based on knowledge gained from massive datasets. Large language model may be trained on large sets of data; for example, training sets may include greater than 1 million words. Training sets may be drawn from diverse sets of data such as, as non-limiting examples, novels, blog posts, articles, emails, user dataset and the like.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Selka with Large Language Model (LLM) ... LLM as taught by Cook with the motivation of generating a digital assistant (Cook, Abstract). The Selka invention, now incorporating the AU and Cook inventions, has all the limitations of claim 1.
Regarding Claim 2, Selka, now incorporating AU and Cook, teaches The method of claim 1, further comprising: selecting the plurality of questions from the persona database based on a selected set of personas and a priority metric associated with each question in the plurality of questions (Selka, para 0121, teaches different chatbots; para 0122, teaches general chatbots answer role specific questions; para 0125, teaches specific chatbots to show use case information; Selka, para 0081, teaches questions are stored in a database... The questions are rendered for all phases in a pre-defined manner, with an option to configure based on types of industries and organization needs; Selka, para 0083, teaches highest-level bucket in which there are 4 phases. Each Phase is made up of several different activities each trying to solve a part of the problem. Each Activity is made up of several different techniques. This is the lowest level bucket of questions (Examiner notes low priority metric)).
Regarding Claim 3, Selka, now incorporating AU and Cook, teaches The method of claim 2, further comprising: identifying, at the processor, a sub-grouping of related questions based on the assessment response, the sub-grouping of related questions being related to a particular selected question of the at least one selected question, wherein the assessment response indicates that further related project initiation data is required relating to the particular selected question and the sub-grouping of related questions is identified to prompt the collected of the further related project initiation data; and (Selka, Abstract, teaches a business method solution enables success through collaboration and data competency resulting in project-ready blueprints (Examiner notes project initiation); Cook, para 0044, teaches A digital assistant may be designed to respond to inquiries about a large dataset is a sophisticated AI-powered system capable of understanding, processing, and providing meaningful responses to a wide range of questions or queries related to the dataset... It understands follow-up questions (Examiner notes sub-grouping of related questions) and maintains a contextually aware conversation); updating, at the processor, the ML-assisted project initiation user interface with the sub-grouping of questions (Selka, para 0019, teaches GUI; para 0027, teaches Machine learning a field of artificial intelligence (AI); See at least Selka, para 0035, teaches processor; para 0117, teaches updates; Cook, para 0044, teaches A digital assistant understands follow-up questions (Examiner notes sub-grouping of related questions) and maintains a contextually aware conversation).
Regarding Claim 4, Selka, now incorporating AU and Cook, teaches The method of claim 1 wherein the document content comprises text (See at least Selka, para 0022 and 0033, teaching extract information and insights contained in the documents).
Regarding Claim 5, Selka, now incorporating AU and Cook, teaches The method of claim 4 wherein the document content comprises image data and the Large Language Model (LLM) system comprises a multi-modal model (See at least Selka, Figure 10, teaching upload image; Cook, para 0074 and 0086, teaches multimodal data and LLM (Cook, para 0055 and 0154); Further, AU also teaches multimodal data, see at least para 0085-0086).
Regarding Claim 6, Selka, now incorporating AU and Cook, teaches The method of claim 1 further comprising: identifying at least two candidate Large Language Model (LLM) systems; and selecting the Large Language Model (LLM) system from the at least two candidate LLM systems (Cook, para 0055 and 0154, teaches a large language model (LLM). A “large language model,” as used herein, is a deep learning algorithm that can recognize, summarize, translate, predict and/or generate text and other content based on knowledge gained from massive datasets; Selka, para 0033, teaches Natural language processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data; Selka, Figure 2, teaches model selection).
Regarding Claim 7, Selka, now incorporating AU and Cook, teaches The method of claim 1 further comprising: receiving, from the user device, a persona selection request comprising the selected plurality of personas (Selka, para 0121, teaches different chatbots; para 0122, teaches General chatbots answer role specific questions; para 0125, teaches specific chatbots to show use case information; Examiner notes chatbot is selected based on specific questions; Further, See at least Cook, para 0045, teaching personalizing the digital assistants; Even Further, AU, para 0024, teaches chatbot personality).
Regarding Claims 8 and 15, the claims are obvious variants of claim 1 above, and are therefore rejected on the same premise. Selka teaches a computer-implemented system, a memory and a processor (See at least Selka, Abstract, teaches a business method and software solution; Selka, para 0035, teaches processor; Selka, para 0135, teaches memory). AU, para 0045, teaches an application may include software included of machine-readable instructions stored on a non-transitory computer readable medium and executable by a processor. Cook, para 0157, teaches software may be a computer program product.
Regarding claims 9 and 16, the claims recite analogous limitations to claim 2 above, and are therefore rejected on the same premise.
Regarding claims 10 and 17, the claims recite analogous limitations to claim 3 above, and are therefore rejected on the same premise.
Regarding claim 11, the claim recites analogous limitations to claim 4 above, and is therefore rejected on the same premise.
Regarding claim 12, the claim recites analogous limitations to claim 5 above, and is therefore rejected on the same premise.
Regarding claims 13 and 18, the claims recite analogous limitations to claim 6 above, and are therefore rejected on the same premise.
Regarding claims 14 and 19, the claims recite analogous limitations to claim 7 above, and are therefore rejected on the same premise.
Additional Prior Art Consulted
The prior art made of record and not relied upon which is considered pertinent to applicant’s disclosure includes the following:
Balu US 2023/0359999 A1 – A method and a cloud-based collaborative system for user-facing project management is disclosed, wherein the implementation method facilitates vendors, one or more users, and one or more third-party entities to communicate and collaborate for tracking and managing projects and implementation. Specifically, the system allows each stakeholder to connect their tools for seamless communication, collaboration, and productivity.
Birru et al. US 2024/0095491 A1 - With the advent of AI and Natural Language Processing (NLP), some advancements have been made in virtual agent technology. Large Language Models (LLMs), such as GPT-3 and GPT-4, have demonstrated impressive capabilities in understanding and generating human-like text. These models have been integrated into virtual agents, allowing them to provide more contextually relevant responses to text-based queries.
Castillo et al. US 2025/0028759 A1 – Teaches large language models and virtual agents.
Godwin US 2016/0342928 A1 - A compliance activity information management and control system and method distributes, collates and tracks automated assurance questions and answers directly from key stakeholders to enable faster decision making, higher quality results and lower delivery costs. Additional features include a question-handling system, an exception reporting system and a social community area. The system also provides a graphical representation in a form of a dashboard on the graphical user interface to assist the management of the facility in managing the facility. Function-specific content incorporates client and industry best practices.
Mielke et al. US 2023/0135179 – Systems and methods for implementing smart assistant systems including a large language model (e.g., GPT-3) as a chat bot/user simulator to perform QA tests on assistant updates.
Ott US 2023/0316104 - An analytics computing device is provided. The analytics computing device may include a processor in communication with a memory. The processor may (1) store, in the memory, a plurality of documents in association with a case identifier; (2) electronically extract content data from the plurality of documents using a semantic analysis engine; (3) generate a case record in the memory including the extracted content data associated with the case identifier, the case record having a predefined data format; (4) execute a machine learning model configured to output a predicted value amount by inputting at least a portion of the extracted content data included in the case record into the machine learning model, the machine learning model trained using a plurality of historical case records and a plurality of historical value amounts; and/or (5) cause the predicted value amount outputted by the machine learning model to be displayed.
Park US 2024/0419656 - Network infrastructure for user-specific generative intelligence. Providing user-specific context to a generically trained LLM introduces a variety of complications (privacy, resource utilization, training costs, etc.). Various aspects of the present disclosure provide novel user-specific data structures, privacy and access control, layers of data, and session management, within a network infrastructure for generative intelligence. For example, user-specific embedding vectors may be used to provide user context to a generically trained foundation model. In some variants, edge devices capture multiple modalities of user context (images, audio; not just text). Privacy and access control mechanisms also allow a user to control information that is captured and sent to the foundation model. Session management further decouples a user's conversational state from the foundation model's session state. These concepts and others may be used to emulate e.g., a chatbot based virtual assistant that responds based on user context.
NPL - R. Mukhamadiev; N. Staroverova; M. Shustrova, “Specifics of Project Management System Development for Large Organizations”, Publisher IEEE, Published October 2020; https://ieeexplore.ieee.org/document/9271201 - Published in: 2020 International Multi-Conference on Industrial Engineering and Modern Technologies - The relevance of the work arises from the fact that it is difficult to imagine the functioning of large organizations without the use of information systems. The article reviews the process of developing an effective project management system, based on all the requirements of such systems.
Applicant is advised to review additional references supplied on the PTO-892 as to the state of the art of the invention.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REBECCA R NOVAK whose telephone number is (571)272-2524. The examiner can normally be reached Monday - Friday 8:30am - 5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda Jasmin can be reached on (571) 272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/R.R.N./
Examiner, Art Unit 3629

/NATHAN C UBER/
Supervisory Patent Examiner, Art Unit 3626