DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/22/25 has been entered.
Response to Amendment and Arguments
35 U.S.C. 101 Rejections
Applicant’s amendment and arguments have been fully considered and are deemed persuasive; therefore, the rejection is withdrawn.
35 U.S.C. 102/103 Rejections
Applicant’s amendments and arguments have been considered but are either unpersuasive or moot in view of the new grounds of rejection necessitated by the amendments to the claims.
Applicant’s arguments are directed to material that is added by the most recent amendments to the Claims. Response, p. 18.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-7, 11, 14-15, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Vajjiparti in view of Kotaru (US 20240330589), and further in view of Zhao (US 20240377932).
Regarding claim 1, Vajjiparti discloses: 1. A computer-implemented method comprising: ([0022] Example methods, devices, and systems are described herein.)
defining a dynamic experience data content prompt for a large language model that comprise data fields for at least one of response engagement factors, ([0180] To that point, FIG. 9B depicts how user feedback for the most populated clusters from clusters 822 can be summarized through use of language model 910. Here, language model 910 is assumed to be an LLM, but other types of language models could be used. In particular, language model 910 can be given prompt 912 and will produce summary 914. Likewise, language model 910 can be given prompt 916 and will produce summary 918. [0203] In some implementations, the summarization model includes a transformer-based large language model that is prompted with a request to summarize the plurality of textual user feedback.) ([0003] Second, this feedback is given to a supervised or unsupervised machine learning model that performs similarity determination, clustering, and/or sentiment analysis on the feedback to generate specific summarizations of the users' experience with the software and/or actionable steps that can be taken to improve the software. [0160] discloses using LLMs, GPT-4, and BERT. [0161] The above techniques can be used for sentiment analysis of a string of text. The result of the sentiment analysis could be a list of one or more sentiments found in the text (e.g., happy, angry, confused)) [The claim only requires one of the recited elements; sentiment analysis on the feedback reads on the response engagement factors, and the output format is a summary in this instance.]
[media_image1.png — greyscale image]
receiving a text-based digital survey response from a respondent device; ([0004] Accordingly, a first example embodiment may involve receiving, via a user interface, a plurality of textual user feedback regarding operation of a software application;) Also see fig. 3, client devices (302).
generating, utilizing a large language model, a first output comprising a data packet from the dynamic experience data content prompt and the text-based digital survey response, wherein the data packet comprises the data fields from the dynamic experience data content prompt populated with data from the text-based survey response; ([0144] Processed feedback responses 618 may be a repository that stores the result of applying trained machine learning model 630 to feedback responses 616. [0147] Thus, feedback configuration and data store 610 may transmit feedback responses 616 (with or without information from per-component questions 612 and/or per-question options 614) to trained machine learning model 630 and store the results in processed feedback responses 618. Trained machine learning model 630 may take the form of one or more artificial intelligence constructs, such as artificial neural networks, decision trees, supervised or unsupervised clustering techniques, transformer-based architectures, expert systems, and so on.) Also see figs. 5 and 6, where feedback responses from the survey or discovery are kept and provided to the trained machine learning model (LLM) as a prompt for generating a response. This portion of the data packet contains the survey response and the prompt, which reads on the dynamic experience data content prompt and the text-based digital survey response. [Also, per the specification of the instant application, as discussed in the response section regarding the 101 rejection, this is not a generating step but a data-gathering step; see paras 0044 and 0050, for instance, which mention that the data packet is sent but make no mention of generating it.] Also see fig. 9B, which shows the prompt, the language model, and the output of the first language model. The dynamic experience data content prompt is basically where the text feedback response from the user is input as part of the prompt to the language model, as seen in fig. 9B.
and based on the executing at least one action with respect to the text-based digital survey response. ([0140] The embodiments herein provide a system for distributed feedback that can include several modules. A customizable feedback component for a user interface allows users to be queried for feedback on the application that they are currently using. The feedback may use free-form input and possibly fixed-scale input as well. This feedback may be given to a supervised or unsupervised machine learning model that performs similarity determination, clustering, or sentiment analysis on the feedback to generate specific summarizations of the users' experience with the software and/or actionable steps that can be taken to improve the software.)
Vajjiparti does not explicitly disclose that the data packet comprises a header that labels a source.
Kotaru (in the related field of using AI in a question-and-answer interface) discloses: wherein the structured data packet comprises a header that labels a source ([0059] To enable such prompts, the text extracted from each of the specification documents is appended with metadata with ‘source’ assigned to the identifier of the technical specification document. [0063] Further, each of the text sample's metadata is refined such that the ‘source’ variable is assigned a string comprising the identifier of the technical specification and the section title. This provides additional context and helps to ensure that the resulting data is properly attributed to the correct source.) [The metadata identifying the source reads on a header that labels a source. The data packet is also considered structured because the metadata acts as structure defining where the data originated: although the original source material might be unstructured text (like a PDF or Word document), the process transforms it into structured data. By associating the text with specific, defined metadata fields (text, metadata, source), the information is organized into a predictable format that can be easily parsed, searched, and processed by software applications. Also see tables 2 and 3 and para 0070, which describe the prompt template.]
Vajjiparti and Kotaru are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Vajjiparti with the teaching of Kotaru to incorporate the above-mentioned feature, because the additional context helps ensure the resulting data is properly attributed to the correct source (Kotaru, [0063]).
Vajjiparti and Kotaru do not explicitly disclose the following features.
Zhao (in the related field of using LLM to assist user with text and content) discloses: defining an additional prompt of the large language model that references the first output; ([0075] In an example, in response to receiving an indication that one of outputs 134A, 134B, and 134C were selected, a further prompt may be displayed. The further prompt may be operable to generate a further output based on the first output and the second prompt as input to the language model.)
generating, utilizing the large language model with the first output and the additional prompt, ([0075] In an example, in response to receiving an indication that one of outputs 134A, 134B, and 134C were selected, a further prompt may be displayed. The further prompt may be operable to generate a further output based on the first output and the second prompt as input to the language model.)
a second output from the large language model; ([0075] The steps of selecting content with the context selector 214, providing a prompt with the prompt generator 216, generating an output with the language model 218, and displaying the output with one of the re-write module 220 or explanation module 222, may be repeated as many times as desired to help a user further iterate or refine an output from the language model 218.)
Vajjiparti, Kotaru, and Zhao are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Vajjiparti and Kotaru with the teaching of Zhao to incorporate the above-mentioned feature, because further prompting and generation of a response based on the first response may improve the quality of the output from the LLM (Zhao, [0075]).
Regarding Claim 3, Vajjiparti in view of Kotaru and Zhao discloses: 3. The computer-implemented method as recited in claim 1,
Vajjiparti further discloses: wherein the text-based digital survey response comprises one of feedback response ([0004] Accordingly, a first example embodiment may involve receiving, via a user interface, a plurality of textual user feedback regarding operation of a software application;) [the claim only requires one of the features]
Regarding Claim 4, Vajjiparti in view of Kotaru and Zhao discloses: 4. The computer-implemented method as recited in claim 1,
Vajjiparti further discloses: wherein defining the dynamic experience data content prompt for the large language model comprises indicating a question type associated with the survey response. ([0003] The feedback may use free-form input and possibly fixed-scale input as well.)
Regarding Claim 5, Vajjiparti in view of Kotaru and Zhao discloses: 5. The computer-implemented method as recited in claim 1,
Vajjiparti further discloses: wherein defining the dynamic experience data content prompt for the large language model comprises indicating instructions to the large language model to determine the response engagement factors representing at least one of a sentiment of the text-based digital survey response, ([0003] Second, this feedback is given to a supervised or unsupervised machine learning model that performs similarity determination, clustering, and/or sentiment analysis on the feedback to generate specific summarizations of the users' experience with the software and/or actionable steps that can be taken to improve the software. [0160] discloses using LLMs, GPT-4, and BERT. [0161] The above techniques can be used for sentiment analysis of a string of text. The result of the sentiment analysis could be a list of one or more sentiments found in the text (e.g., happy, angry, confused)) [The claim only requires one of the features listed]
Regarding Claim 6, Vajjiparti in view of Kotaru and Zhao discloses: 6. The computer-implemented method as recited in claim 1,
Vajjiparti further discloses: wherein the text-based digital survey response comprises responses from a plurality of respondent devices. (See fig. 5, plurality of devices where queries and responses are transmitted to the server.) Also see fig. 3, client devices.
Regarding Claim 7, Vajjiparti in view of Kotaru and Zhao discloses: 7. The computer-implemented method as recited in claim 1,
Vajjiparti further discloses: receiving from the large language model a determined category of the text-based digital survey response based on the defined experience data content categories. ([0161] The above techniques can be used for sentiment analysis of a string of text. The result of the sentiment analysis could be a list of one or more sentiments found in the text (e.g., happy, angry, confused) … classify text into one or more categories, such as positive, negative, or neutral categories.)
Regarding Claim 11, Vajjiparti discloses: 11. A non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause a computer device to: ([0048] Memory 104 may store program instructions and/or data on which program instructions may operate. By way of example, memory 104 may store these program instructions on a non-transitory, computer-readable medium, such that the instructions are executable by processor 102 to carry out any of the methods, processes, or operations disclosed in this specification or the accompanying drawings.)
As for the rest of the claim, it recites the elements of Claim 1, and therefore the rationale applied in the rejection of Claim 1 is equally applicable.
Claim 14 is a non-transitory CRM claim with limitations similar to the limitations of Claim 5 and is rejected under similar rationale.
Claim 15 is a non-transitory CRM claim with limitations similar to the limitations of Claim 7 and is rejected under similar rationale.
Regarding Claim 17, Vajjiparti discloses: 17. A system comprising: at least one processor; and at least one non-transitory computer-readable storage medium storing instructions that, when executed by the at least one processor, cause the system to: ([0005] A second example embodiment may involve a non-transitory computer-readable medium, having stored thereon program instructions that, upon execution by a computing system, cause the computing system to perform operations in accordance with the first and/or second example embodiment.) ([0048] Memory 104 may store program instructions and/or data on which program instructions may operate. By way of example, memory 104 may store these program instructions on a non-transitory, computer-readable medium, such that the instructions are executable by processor 102 to carry out any of the methods, processes, or operations disclosed in this specification or the accompanying drawings.)
As for the rest of the claim, it recites the elements of Claim 1, and therefore the rationale applied in the rejection of Claim 1 is equally applicable.
Claim 20 is a system claim with limitations similar to the limitations of Claim 5 and is rejected under similar rationale.
Claims 2, 12 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Vajjiparti in view of Kotaru and Zhao, and furthermore in view of Smargon (US 20120226743).
Regarding claim 2, Vajjiparti in view of Kotaru and Zhao discloses: 2. The computer-implemented method as recited in claim 1,
Zhao further discloses: wherein: receiving the second output from the large language model comprises receiving at least one of (see fig. 6A, display the second output)
The motivation for the combination would be similar to the one previously provided.
Vajjiparti in view of Kotaru and Zhao does not explicitly disclose the below feature, but Smargon (in the related field of providing coupons to qualify respondents of survey) discloses: wherein: receiving the second output from the large language model comprises receiving a coupon; ([0037] The survey server can also send various notifications to survey respondents. For example, respondents may be notified when they earn a reward for responding to a survey, when an unredeemed coupon they have earned is about to expire)
and executing the at least one action further comprises routing the second output to a relevant segment of an entity or ([0066] When the respondent has qualified for a coupon, the coupon can be sent to the inbox of the respondent or the respondent may receive the coupon via email or other electronic communication. When the respondent receives the coupon, it could be opened or closed. Open coupons have identifiers that are visible to the respondent. The identifiers of closed coupons are not visible to the respondent. If the coupon is closed, the respondent can send a request to the survey server to open the coupon, after which the respondent would then be presented with a corresponding opened coupon.)
Vajjiparti, Kotaru, Zhao, and Smargon are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Vajjiparti, Kotaru, and Zhao with the teaching of Smargon, because coupons or other rewards would encourage more qualified individuals to participate in surveys (Smargon, [Abstract]).
Claim 12 is a non-transitory CRM claim with limitations similar to the limitations of Claim 2 and is rejected under similar rationale.
Claim 18 is a system claim with limitations similar to the limitations of Claim 2 and is rejected under similar rationale.
Claims 8-9, 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Vajjiparti in view of Kotaru and Zhao, and furthermore in view of Morningstar (US 20200342470).
Regarding claim 8, Vajjiparti in view of Kotaru and Zhao discloses: 8. The computer-implemented method as recited in claim 1,
Vajjiparti in view of Kotaru and Zhao does not explicitly disclose the below feature, but Morningstar discloses: wherein executing the at least one action comprises, based on the first output, determining an administrator client device to which to send the first output. ([0127] the topic tracking system 106 generates and/or distributes one or more notifications based on a topic variance based on one or more preferences (or settings) provided by the administrator device 110. For example, the topic tracking system 106 can receive preferences (or settings) that specify rules for generating and/or distributing a notification based on the type of topic variance, a category of topics, and/or the information associated with the topic variance. For instance, the topic tracking system 106 can receive preferences to provide an email message to all administrator devices associated with the one or more digital surveys associated with the identified topic variance when the topic variance is a change in sentiment value or an emerging topic. Furthermore, as an example, the topic tracking system 106 can receive preferences to provide a notification within a user interface for the digital survey when the topic variance is a downward trend in mentions of a topic. Furthermore, in some embodiments, the topic tracking system 106 receives preferences to provide a notification for a topic variance for a specific topic to a specific administrator device (e.g., provide all notifications for identified topic variances related to the topic of “shipping” to an administrator device associated with a shipping manager).)
Vajjiparti, Kotaru, Zhao, and Morningstar are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Vajjiparti, Kotaru, and Zhao with the teaching of Morningstar, because the described system provides relevance, prioritization, and efficiency by ensuring that the relevant stakeholders receive alerts filtered to their specific areas of responsibility (Morningstar, [0127]).
Regarding Claim 9, Vajjiparti in view of Kotaru and Zhao discloses: 9. The computer-implemented method as recited in claim 8,
Morningstar further discloses: further comprising: selecting the administrator client device from a set of administrator devices based on the response engagement factors and the experience data content categories. ([0127] For example, the topic tracking system 106 can receive preferences (or settings) that specify rules for generating and/or distributing a notification based on the type of topic variance, a category of topics, and/or the information associated with the topic variance. when the topic variance is a change in sentiment value … Furthermore, in some embodiments, the topic tracking system 106 receives preferences to provide a notification for a topic variance for a specific topic to a specific administrator device (e.g., provide all notifications for identified topic variances related to the topic of “shipping” to an administrator device associated with a shipping manager).)
The rationale for the combination would be similar to the one already provided.
Claim 13 is a non-transitory CRM claim with limitations similar to the limitations of Claim 8 and is rejected under similar rationale.
Claim 19 is a system claim with limitations similar to the limitations of Claim 8 and is rejected under similar rationale.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Vajjiparti in view of Kotaru and Zhao, and furthermore in view of Childress (US 20230410022).
Regarding Claim 16, Vajjiparti in view of Kotaru and Zhao discloses: 16. The non-transitory computer-readable medium of claim 11,
Vajjiparti in view of Kotaru and Zhao does not explicitly disclose the below feature, but Childress discloses: further comprising instructions that, when executed by the at least one processor, cause the computer device to generate the first output by: receiving a set of recommendations based on the text-based digital survey response from the large language model; ([0104] the ML model(s) 1225 can generate the customized content 1240 using generative artificial intelligence (AI) content generation techniques, for instance by generating text using at least one LLM as part of the ML model(s) 1225, … summaries of large amounts of survey responses, recommendations based on the input(s) 1205)
and providing the first output that comprises the set of recommendations to an administrator client device. ([0042] the responses may be sent to an administrator for review. [0065] A report generation 110 component may access survey results from survey management 108 or from electronic storage 122 in order to generate reports which may be reviewed by users via client computing platforms)
Vajjiparti, Kotaru, Zhao, and Childress are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Vajjiparti, Kotaru, and Zhao with the teaching of Childress, because the systems and techniques can provide customized, personalized, tailored insights, such as scores, follow-up actions, and/or customized content (Childress, [0006]).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Vajjiparti in view of Kotaru and Zhao, and furthermore in view of Brown (US 20150051976).
Regarding claim 10, Vajjiparti in view of Kotaru and Zhao discloses: 10. The computer-implemented method as recited in claim 1,
Vajjiparti further discloses: providing text-based digital survey response and the dynamic experience data content prompt; ([0203] In some implementations, the summarization model includes a transformer-based large language model that is prompted with a request to summarize the plurality of textual user feedback.)
and generating the first output from the large language model based on dynamic experience data content prompt, and the text-based digital survey response. ([0203] In some implementations, the summarization model includes a transformer-based large language model that is prompted with a request to summarize the plurality of textual user feedback.)
Vajjiparti in view of Kotaru and Zhao does not explicitly disclose the below feature, but Brown discloses: further comprising: identifying contextual data relating to the respondent device; ([0049] In addition to the various advantages of the mobile device detection and identification systems and methods discussed above, additional advantages may include providing an improved customer service experience. For instance, by detecting a mobile device and identifying a user of the device, user information may be accessed that may aid in providing a more tailored customer service experience for the user. For instance, understanding the user's spending patterns or history may aid in providing more desirable product or service offerings to the user)
Vajjiparti, Kotaru, Zhao, and Brown are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Vajjiparti, Kotaru, and Zhao with the teaching of Brown, because understanding the user's patterns or history may aid in providing more desirable product or service offerings to the user (Brown, [0049]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Robbins (US 20200342470) discloses "A survey system according to various embodiments of the invention is adapted to: (1) generate and, optionally, distribute a survey; (2) receive the results of the survey; (3) process the results of the survey and, optionally, distribute the results of the survey to one or more managers within an organization; and (4) generate and, optionally, distribute an action planning survey to the one or more managers within the organization. In various embodiments, this action planning survey requires at least one manager to provide a recommended course of action for addressing one or more target areas, as well as a quantified estimate as to what effect the occurrence of the implementation of the course of action (e.g., the addition of 200 more parking spaces within a particular apartment building) would have on one or more measurable types of data (e.g., tenant satisfaction with parking, or tenant renewal rates)." See the Abstract, para 0028, and figs. 3-7 for additional details.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip H Lam whose telephone number is (571)272-1721. The examiner can normally be reached 9 AM-4 PM Pacific time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached on 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHILIP H LAM/ Examiner, Art Unit 2656