Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status of the Application
The following is a Final Office Action in response to communication received on 2/18/2026. Claims 1-20 are pending in this office action.
Response to Amendment
Applicant’s amendments to claims 1, 9, and 14 are acknowledged.
Response to Arguments
On Remarks page 9, Applicant argues the claims do not recite an activity a human could practically perform. The Examiner strongly disagrees. The claims recite collecting survey information, determining an emotion or sentiment based on the collected survey information, having a user confirm whether the determined sentiment or emotion is correct or incorrect, storing this collected and determined information, and then aggregating the information for display and visualization. Given the broad recitation in the claims, these recite limitations a human or humans could perform. Collecting users’ sentiments in surveys and aggregating and displaying those results for use by the users or other users is subject matter related to managing personal behavior or relationships or interactions between people, as the claims recite social activities, which are certain methods of organizing human activity. The additional elements of this being performed on the argued device merely result in apply it or generally linking it to the field of computers, as detailed in the 101 rejection below. Further, the claims do not require that such limitations be performed in the argued “real time”; however, even if they did (which the Examiner does not contend), such limitations of performing the human activity steps or functions in real time through use of a computing system or server (of claim 1), instead of by pen and paper, would merely result in apply it. Specifically, here the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea, does not integrate a judicial exception into a practical application or provide significantly more. Further, this merely results in generally linking it to the field of computers.
On Remarks page 10, Applicant argues the 101 rejection and that the claims recite a practical application. Specifically Applicant argues “The amended claims recite an improvement to the functioning of a computer system itself. Specifically, the claims recite: (1) a real-time, pre-completion inference engine (the analyzer module) that processes survey data during the survey session to produce an inferred emotional sentiment; (2) a specific interactive verification interface with distinct confirmation and correction pathways, including selectable emotional states for correction; (3) a dual-repository data architecture that maintains a distinction between unverified and verified emotional sentiment data; and (4) a verification accuracy computation that compares inferred versus verified sentiments on a per-user basis to quantify the accuracy improvement achieved by the verification step, with the results displayed in a segmented summary panel.” The Examiner respectfully disagrees.
The claims are recited at such a high level of generality that they recite certain methods of organizing human activity with additional computer elements that merely result in apply it or generally linking it to the field of computers.
Specifically with respect to (1), the claim recites “analyze, via an analyzer module and prior to completion of the digital survey, the one or more responses including structured answer and sentiment indicators to generate an inferred emotional sentiment for each user.” These are human activities and therefore part of the abstract idea. Specifically, it is a human activity to analyze a survey, for example after a few questions, to determine emotion or sentiment, given the broad recitation in the claim. The additional element that this is broadly recited as being performed by software running on a computer (e.g., an analyzer module) and that the survey is digital merely results in apply it, as detailed in the 101 rejection below. Here it is noted that the additional element of the “analyzer module” recites a result-oriented solution and lacks details as to how the computer performs the modifications or the mechanism for accomplishing the desired result (e.g., the inferred emotional sentiment), which is equivalent to the words “apply it”.
With respect to (2), Applicant recites “transmit the inferred emotional sentiment to the one or more participant devices for presentation to the user together with a confirmation prompt, the confirmation prompt comprising a confirmation option and a correction option, the correction option enabling the user to select a revised emotional sentiment from a set of selectable emotional states”. These are human activities and therefore part of the abstract idea. Specifically, it is a human activity to prompt a user to confirm or verify a determination. The additional element that this is broadly recited as being performed by devices merely results in apply it and generally linking it to the field of computers, as detailed below in the 101 rejection.
With respect to (3), Applicant recites “record a verified emotional sentiment based on user confirmation or correction of the inferred emotional sentiment and store the verified emotional sentiment in a verified data repository distinct from an unverified data repository storing the one or more responses”. These are human activities and therefore part of the abstract idea. Specifically, it is a human activity to store various information in different places. There are no additional elements in this limitation beyond those previously discussed above.
With respect to (4), Applicant recites “allow filtering of the aggregated emotional sentiment data based on one or more parameters including topic, department, location, or employee demographics; and display visualizations showing sentiment distribution, group comparisons, and user-submitted commentary linked to aggregated emotional sentiment data; and display the verification accuracy data in a summary panel showing segmented proportions of users whose emotional state shifted positively, shifted negatively, or remained unchanged relative to the inferred emotional sentiment.” These are human activities and therefore part of the abstract idea. Specifically, it is a human activity to collect data, aggregate it, filter it, and display segmented, filtered, and aggregated data according to distributions or specific constraints on a display. There are no additional elements beyond those discussed above. Therefore the Examiner respectfully disagrees.
On Remarks page 10, Applicant argues “This combination of features reflects a specific improvement to how computing systems collect and verify survey sentiment data. The specification explicitly describes the prior art process (FIG. 1) as a sequential, offline workflow where analysis occurs only after survey completion. The claimed system fundamentally changes this process by introducing a real-time inference-and-verification feedback loop that produces more accurate sentiment data before it ever reaches the reporting layer. This is an improvement to the functioning of the computer system, not merely using a computer as a tool. See Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1336 (Fed. Cir. 2016).”
The Examiner respectfully disagrees. For one, the Examiner does not find the argued improvement in Applicant’s specification; therefore the Examiner does not find the case similar to MPEP 2106.05(a) or the argued Enfish. The cited Figure 1 does not disclose this as an improvement. Further, this is found in the abstract idea as broadly recited in the claim language. Specifically, the claims are so broadly recited that they recite human activities of providing a survey with a few questions, checking answers and providing analysis like sentiment or feelings or emotions, and providing a few more questions in response to that predicted sentiment, feeling, or emotion to confirm those predictions. The additional elements that this is being performed by software running on a computer (specifically a server with a processor, memory, and an analyzer module) and that the surveys are being displayed on a device merely result in apply it or generally linking it to the field of computers.
Therefore the Examiner respectfully disagrees.
On Remarks pages 10-11, Applicant argues that the interface is not generic, cites Core Wireless, and argues that the specific manner of displaying information improves the user interface. The Examiner respectfully disagrees. The present application is unlike Core Wireless, USPTO Example 37, and MPEP 2106.05(a), as the present specification is not directed towards improving the interface over prior art systems’ interfaces for displaying information. Instead, the limitations of “an administrator interface configured to: display aggregated emotional sentiment data and comment summaries for one or more workplace groups; allow filtering of the aggregated emotional sentiment data based on one or more parameters including topic, department, location, or employee demographics; and display visualizations showing sentiment distribution, group comparisons, and user-submitted commentary linked to aggregated emotional sentiment data; and display the verification accuracy data in a summary panel showing segmented proportions of users whose emotional state shifted positively, shifted negatively, or remained unchanged relative to the inferred emotional sentiment” are so broadly recited that they recite human activities of filtering aggregated survey data and displaying information according to filters, e.g., displaying positive survey respondents, negative survey respondents, changed survey respondents, etc. The only additional element, that this display is an “interface,” merely results in apply it or generally linking it to the field of computers, as discussed below under the 101 rejection.
On Remarks page 11, Applicant argues the limitations are not well-understood, routine, and conventional and that the Examiner has not identified any conventional system that performs this specific combination of operations. The Examiner respectfully disagrees, as the Examiner provided reasoning as to why the additional element limitations were merely apply it or generally linking it to the field of computers (see MPEP 2106.05(f) and MPEP 2106.05(h)); therefore there is no requirement of evidence of well-understood, routine, and conventional activities in view of the Berkheimer Memo.
On Remarks pages 11-14, Applicant argues the prior art in view of Applicant’s amendments. Considering the combination of elements as now amended, the Examiner has applied new grounds of rejection (e.g., newly cited prior art), rendering Applicant’s arguments with respect to the previously cited prior art moot.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1-8 recite a machine as the claims recite a system with a device, a server, and an interface. Claims 9-13 recite a process as the claims recite a method. Claims 14-20 recite an article of manufacture as the claims recite a non-transitory computer readable medium storing instructions being executed by one or more processors.
The claim(s) 1-20 recite(s) collecting survey information, determining an emotion or sentiment based on the collected survey information, having a user confirm whether the determined sentiment or emotion is correct or incorrect, storing this collected and determined information, and then aggregating the information for display and visualization.
The above, which relates to collecting users’ sentiments in surveys and aggregating and displaying those results for use by the users or other users, is subject matter related to managing personal behavior or relationships or interactions between people, as the claims recite social activities, which are certain methods of organizing human activity.
Certain methods of organizing human activity are in the groupings of enumerated abstract ideas, and hence the claims recite an abstract idea.
This judicial exception is not integrated into a practical application because the claims merely recite limitations that are not indicative of integration into a practical application in that the claims merely recite:
(1) Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)), and (2) Generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). Specifically as recited in the claims:
Examiner notes that limitations bolded and underlined for distinction are considered additional elements. Limitations not bolded and underlined are considered a part of the abstract idea.
1. A computing system for collecting, verifying, and reporting data from survey participants, comprising:
one or more participant devices, each configured to:
present a digital survey comprising a plurality of questions related to a topic of interest;
receive one or more responses from a user, including selected answers and optionally written comments;
a server comprising one or more processors and memory for storing instructions that, when executed, cause the server to:
receive and store the one or more responses from the one or more participant devices;
analyze, via an analyzer module and prior to completion of the digital survey, the one or more responses including structured answer and sentiment indicators to generate an inferred emotional sentiment for each user;
transmit the inferred emotional sentiment to the one or more participant devices for presentation to the user together with a confirmation prompt, the confirmation prompt comprising a confirmation option and a correction option, the correction option enabling the user to select a revised emotional sentiment from a set of selectable emotional states;
record a verified emotional sentiment based on user confirmation or correction of the inferred emotional sentiment and store the verified emotional sentiment in a verified data repository distinct from an unverified data repository storing the one or more responses;
associate written comments with the verified emotional sentiment;
and aggregate the verified data for presentation in one or more reporting interfaces; and
compare, on a per-user basis, the inferred emotional sentiment with the verified emotional sentiment to generate verification accuracy data quantifying a proportion of users whose verified emotional sentiment differed from the inferred emotional sentiment;
an administrator interface configured to:
display aggregated emotional sentiment data and comment summaries for one or more workplace groups;
allow filtering of the aggregated emotional sentiment data based on one or more parameters including topic, department, location, or employee demographics;
and display visualizations showing sentiment distribution, group comparisons, and user-submitted commentary linked to aggregated emotional sentiment data; and
display the verification accuracy data in a summary panel showing segmented proportions of users whose emotional state shifted positively, shifted negatively, or remained unchanged relative to the inferred emotional sentiment.
2. The computing system of claim 1, wherein the memory is a non-transitory computer-readable storage medium; and wherein the instructions stored on the non-transitory computer-readable storage medium further cause the server to present, to each user, a confirmation prompt asking whether the inferred emotional sentiment accurately reflects how the user feels.
3. The computing system of claim 1, wherein the server is further configured to update the verified emotional sentiment based on a user-selected correction to the proposed emotional sentiment.
4. The computing system of claim 1, wherein each written comment in the comment summaries submitted by the user is tagged with a label corresponding to the verified emotional sentiment of the user at a time the comment was submitted.
5. The computing system of claim 1, wherein the administrator interface is further configured to display a distribution of emotional sentiment values across a plurality of workplace groups in a comparative format.
6. The computing system of claim 1, wherein the administrator interface is configured to filter sentiment results by at least one of a survey topic, a department, a location, an employee role, and a demographic attribute.
7. The computing system of claim 1, wherein the administrator interface further displays employee comments in a format grouped by associated emotional sentiment.
8. The computing system of claim 1, wherein the server is configured to track and display changes in emotional sentiment over time for individual users or defined workplace groups.
9. A method for collecting, verifying, and reporting emotional sentiment data from survey participants, comprising:
presenting, via one or more participant devices, a digital survey comprising a plurality of questions related to a topic of interest;
receiving, from a user, one or more responses to the digital survey, including selected answers and optionally written comments;
analyzing, via an analyzer module executing on a server and prior to completion of the digital survey, the one or more responses including structured answers and sentiment indicators to generate an inferred emotional sentiment for the user;
presenting the inferred emotional sentiment to the user and prompting the user to confirm or revise the inferred emotional sentiment by displaying a confirmation prompt comprising a confirmation option and a correction option, the correction option enabling the user to select a revised emotional sentiment from a set of selectable emotional states;
receiving, from the user, a verified emotional sentiment;
associating the verified emotional sentiment with the one or more responses and any written comments;
storing the verified emotional sentiment in association with a survey record of the user in a verified data repository distinct from an unverified data repository storing the one or more responses;
comparing, on a per-user basis, the inferred emotional sentiment with the verified emotional sentiment to generate verification accuracy data quantifying a proportion of users whose verified emotional sentiment differed from the inferred emotional sentiment;
and displaying, via an administrator interface, one or more visualizations of aggregated emotional sentiment data and associated commentary for individual users or workplace groups; and
displaying, via the administrator interface, the verification accuracy data in a summary panel showing segmented proportions of users whose emotional state shifted positively, shifted negatively, or remained unchanged relative to the inferred emotional sentiment.
10. The method of claim 9, further comprising presenting a confirmation prompt to the user asking whether the inferred emotional sentiment accurately reflects how the user feels.
11. The method of claim 9, further comprising: updating the verified emotional sentiment based on a user selection to revise the inferred emotional sentiment;
tagging each written comment submitted by the user with a label corresponding to the verified emotional sentiment;
and filtering the aggregated emotional sentiment data based on at least one of a survey topic, a department, a location, an employee role, and a demographic attribute.
12. The method of claim 9, wherein the one or more visualizations comprise a distribution of emotional sentiment values across a plurality of workplace groups.
13. The method of claim 9, further comprising:
grouping and displaying user-submitted comments according to the associated verified emotional sentiment;
and tracking changes in the associated verified emotional sentiment over time for individual users or predefined organizational groups.
14. A non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, cause a computing system to perform a method for collecting, verifying, and reporting emotional sentiment data, the method comprising:
presenting a digital survey to a user via a participant device, the digital survey comprising a plurality of questions related to a topic of interest;
receiving one or more survey responses from the user, including selected answers and optionally written comments;
analyzing, via an analyzer module and prior to completion of the digital survey, the responses including structured answers and sentiment indicators to generate an inferred emotional sentiment;
prompting the user to confirm or revise the inferred emotional sentiment by displaying a confirmation prompt comprising a confirmation option and a correction option, the correction option enabling the user to select a revised emotional sentiment from a set of selectable emotional states;
recording a verified emotional sentiment based on input from the user;
associating the verified emotional sentiment with the one or more survey responses and any written comments;
aggregating the verified emotional sentiment data from multiple users in a verified data repository distinct from an unverified data repository storing the one or more survey responses;
comparing, on a per-user basis, the inferred emotional sentiment with the verified emotional sentiment to generate verification accuracy data quantifying a proportion of users whose verified emotional sentiment differed from the inferred emotional sentiment;
and generating one or more visualizations of the aggregated verified emotional sentiment data for presentation through an administrator interface; and
generating a verification accuracy visualization displaying segmented proportions of users whose emotional state shifted positively, shifted negatively, or remained unchanged relative to the inferred emotional sentiment.
15. The non-transitory computer-readable storage medium of claim 14, wherein the instructions further cause the computing system to present a confirmation prompt asking the user to verify whether the inferred emotional sentiment accurately reflects how the user feels.
16. The non-transitory computer-readable storage medium of claim 14, wherein the instructions further cause the computing system to receive a revised emotional sentiment from the user and store the revised emotional sentiment as a verified emotional sentiment.
17. The non-transitory computer-readable storage medium of claim 14, wherein the instructions further cause the computing system to associate a label with each written comment submitted by the user, the label corresponding to the verified emotional sentiment.
18. The non-transitory computer-readable storage medium of claim 14, wherein the instructions further cause the computing system to filter the aggregated verified sentiment data based on at least one of: a topic, a department, an organizational unit, a location, and an employee demographic attribute.
19. The non-transitory computer-readable storage medium of claim 14, wherein the instructions further cause the computing system to generate a visualization comprising a distribution of emotional sentiment values across multiple workplace groups.
20. The non-transitory computer-readable storage medium of claim 14, wherein the instructions further cause the computing system to:
display user comments grouped by their associated verified emotional sentiment; and
track and display changes in verified emotional sentiment over time for individual users or defined organizational groups.
As per claim 1, Applicant recites activities a human or humans could perform given the broad recitation in the claims. Specifically, a human or humans could present a survey with multiple questions, collect responses, make a determination of an emotional sentiment based on the responses, request a confirmation of the determination of emotional sentiment, aggregate the information and store it in various data stores for later use, and then provide the aggregated information to a user to display visualizations of the collected information by different parameters. The additional elements, that these limitations that could be performed by a human or humans are instead recited as being performed by “devices”, the system is a “computing” system, “a server” (comprising a processor and memory and an analyzer module), “interfaces”, and the surveys are “digital”, merely result in apply it. Specifically, here the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea, does not integrate a judicial exception into a practical application or provide significantly more. Further, here it is noted that the additional amended element of the “analyzer module” recites a result-oriented solution and lacks details as to how the computer performs the modifications or the mechanism for accomplishing the desired result (e.g., the inferred emotional sentiment), which is equivalent to the words “apply it”.
Further, the limitations that could be performed by a human or humans but are instead recited as being performed by “devices”, the system being a “computing” system, “a server” (comprising a processor, memory, and a module), “interfaces”, and the surveys being “digital” merely result in generally linking it to the field of computers.
As per claim 2, Applicant recites activities a human or humans could perform. Specifically a human or humans could present a question to a user to determine whether or not the determined emotion was correct as broadly recited in the claim. The additional element that this is being done by software running on a computer “the memory is a non-transitory computer readable storage medium; and wherein the instructions stored on the non-transitory computer readable storage medium further cause the server to” and the system is a computing system merely results in apply it or generally linking it to the field of computers as discussed above.
As per claim 3, Applicant recites activities a human or humans could perform. Specifically, a human or humans could update the emotional sentiment based on a correction received, as broadly recited in the claims. The additional element that this is being done by a server and the system is a computing system merely results in apply it or generally linking it to the field of computers, as discussed above in claim 1.
As per claim 4, Applicant recites activities a human or humans could perform. Specifically, a human or humans could tag a written comment with a label corresponding to the emotional sentiment of the user at the time the comment was submitted. There are no additional elements beyond those discussed above in claim 1, like that of the system being a computing system.
As per claim 5, Applicant recites activities a human or humans could perform. Specifically, a human or humans could display distributions of aggregated sentiment information in a comparative format. The additional element that this is being done by an administrator interface and the system being a computing system merely results in apply it or generally linking it to the field of computers, as discussed above in claim 1.
As per claim 6, Applicant recites activities a human or humans could perform. Specifically, a human or humans could filter information according to different parameters or results like topic, department, location, etc. The additional element that this is being done by an administrator interface and the system being a computing system merely results in apply it or generally linking it to the field of computers, as discussed above in claim 1.
As per claim 7, Applicant recites activities a human or humans could perform. Specifically, a human or humans could display employee comments grouped by emotional sentiment. The additional element that this is being done by an administrator interface and the system being a computing system merely results in apply it or generally linking it to the field of computers, as discussed above in claim 1.
As per claim 8, Applicant recites activities a human or humans could perform. Specifically, a human or humans could track and display changes in emotional sentiment over time for individual users or defined workplace groups. The additional element that this is being done by a server and the system being a computing system merely results in apply it or generally linking it to the field of computers, as discussed above in claim 1.
As per claim 9, Applicant recites activities a human or humans could perform. Specifically, a human or humans could present a survey with multiple questions, collect responses, make a determination of an emotional sentiment based on the responses, request a confirmation of the determination of emotional sentiment, aggregate the information and store it in various data stores for later use, and then provide the aggregated information to a user to display visualizations of the collected information by different parameters. The additional elements, that these limitations that could be performed by a human or humans are instead recited as being performed by “devices”, software running on a computer (specifically a server with a module), “interfaces”, and the surveys are “digital”, merely result in apply it. Specifically, here the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea, does not integrate a judicial exception into a practical application or provide significantly more. Here it is noted that the additional element of the amended “analyzer module” recites a result-oriented solution and lacks details as to how the computer performs the modifications or the mechanism for accomplishing the desired result (e.g., the inferred emotional sentiment), which is equivalent to the words “apply it”.
Further, the limitations that could be performed by a human or humans but are instead recited as being performed by “devices”, “interfaces”, software running on a computer (specifically a server with a module), and the surveys being “digital” merely result in generally linking it to the field of computers.
As per claim 10, Applicant recites activities a human or humans could perform. Specifically a human or humans could present a question to a user to determine whether or not the determined emotion was correct as broadly recited in the claim. There are no additional elements beyond those discussed above.
As per claim 11, Applicant recites activities a human or humans could perform. Specifically a human or humans could update the emotional sentiment based on the revised selection, tag a written comment corresponding to the emotional sentiment and filter the aggregated sentiment according to parameters like topic, department, location, etc. There are no additional elements beyond those discussed above.
As per claim 12, Applicant recites activities a human or humans could perform. Specifically a human or humans could create visualizations that comprise a distribution of emotional sentiment values across a plurality of workplace groups. There are no additional elements beyond those discussed above.
As per claim 13, Applicant recites activities a human or humans could perform. Specifically a human or humans could group and display comments associated with verified emotional sentiment and track changes in the associated verified emotional sentiment over time for individual users or organizational groups. There are no additional elements beyond those discussed above.
As per claim 14, Applicant recites activities a human or humans could perform. Specifically, a human or humans could present a survey with multiple questions, collect responses, make a determination of an emotional sentiment based on the responses, request confirmation of the determination of emotional sentiment, aggregate the information and store it in various data stores for later use, and then provide the aggregated information to a user to display visualizations of the collected information by different parameters. The additional elements, namely that these limitations that could be performed by a human or humans are instead recited as being performed by software running on a computer (“a non-transitory computer readable storage medium storing instructions that, when executed by one or more processors, cause a computing system to perform a method for collecting, verifying, and reporting emotional sentiment data, the method comprising”), software running on a computer (specifically a module), “devices”, and “interfaces”, and that the surveys are “digital”, merely result in “apply it”. Specifically, here the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g. to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea does not integrate a judicial exception into a practical application or provide significantly more. Here it is noted that the additional amended element of the “analyzer module” recites a result-oriented solution and lacks details as to how the computer performs the modifications or the mechanism for accomplishing the desired result (e.g. the inferred emotional sentiment), which is equivalent to the words “apply it”.
Further, limitations that could be performed by humans but that are instead recited as being performed by software running on a computer (“a non-transitory computer readable storage medium storing instructions that, when executed by one or more processors, cause a computing system to perform a method for collecting, verifying, and reporting emotional sentiment data, the method comprising”), “devices”, “interfaces”, software running on a computer (specifically a module), and via “digital” surveys merely result in generally linking the judicial exception to the field of computers.
As per claim 15, Applicant recites activities a human or humans could perform. Specifically a human or humans could present a question to a user to determine whether or not the determined emotion was correct as broadly recited in the claim. The additional element that this is being done by software running on a computer “wherein the instructions further cause the computing system to” merely results in apply it or generally linking it to the field of computers as discussed above.
As per claim 16, Applicant recites activities a human or humans could perform. Specifically a human or humans could receive a revised emotional sentiment for the user and store the revised emotional sentiment. The additional element that this is being done by software running on a computer “wherein the instructions further cause the computing system to” merely results in apply it or generally linking it to the field of computers as discussed above.
As per claim 17, Applicant recites activities a human or humans could perform. Specifically a human or humans could label a comment with a sentiment or verified emotional sentiment. The additional element that this is being done by software running on a computer “wherein the instructions further cause the computing system to” merely results in apply it or generally linking it to the field of computers as discussed above.
As per claim 18, Applicant recites activities a human or humans could perform. Specifically a human or humans could filter the aggregated sentiment based on parameters like topic, department, organizational unit, etc. The additional element that this is being done by software running on a computer “wherein the instructions further cause the computing system to” merely results in apply it or generally linking it to the field of computers as discussed above.
As per claim 19, Applicant recites activities a human or humans could perform. Specifically a human or humans could generate a visualization comprising a distribution of emotional sentiment values across multiple workplace groups. The additional element that this is being done by software running on a computer “wherein the instructions further cause the computing system to” merely results in apply it or generally linking it to the field of computers as discussed above.
As per claim 20, Applicant recites activities a human or humans could perform. Specifically a human or humans could display user grouped comments by their associated verified emotional sentiment, and track and display changes in verified emotional sentiment over time for individual users or defined organizational groups. The additional element that this is being done by software running on a computer “wherein the instructions further cause the computing system to” merely results in apply it or generally linking it to the field of computers as discussed above.
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims merely recite limitations that are not indicative of an inventive concept (“significantly more”) in that the claims merely recite:
(1) Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)) and (2) Generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)), as detailed above under the practical application step.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Williams et al. (United States Patent Application Publication Number: US 2016/0203500) further in view of Nicolov et al. (United States Patent Application Publication Number: US 2009/0306967) further in view of Kim et al. (United States Patent Application Publication Number: US 2015/0228198).
As per claim 1, Williams et al. teaches A computing system for collecting, verifying, and reporting data from survey participants, comprising: (see abstract, Examiner’s note: system for improved remote processing and interaction with an artificial survey administrator).
one or more participant devices, each configured to: (see paragraphs 0074-0075, Examiner’s note: system used in connection with computer terminals, where computer terminals can be things like mobile devices or cell phones).
present a digital survey comprising a plurality of questions related to a topic of interest; receive one or more responses from a user, including selected answers and optionally written comments; (see Figures 2-3, Examiner’s note: show survey with multiple questions).
a server comprising one or more processors and memory for storing instructions that, when executed, cause the server to: (see paragraphs 0074 and 0082, Examiner’s note: software running on a server performing functions).
receive and store the one or more responses from the one or more participant devices; analyze, via an analyzer module and prior to completion of the digital survey, the one or more responses including structured answers and sentiment indicators to generate an inferred emotional sentiment for each user; (see paragraphs 0014-0015 and 0023, Examiner’s note: determining user sentiment and subsequent questions will be determined based on user response (see paragraph 0014), further teaches structured and unstructured data (see paragraphs 0015, 0023). Further teaches if the confidence level is below a threshold for the sentiment prompting the user for confirmation of their perceived sentiment with a question rating).
transmit the inferred emotional sentiment to the one or more participant devices for presentation to the user together with a confirmation prompt, the confirmation prompt comprising a confirmation option and a correction option, the correction option enabling the user to select a revised emotional sentiment from a set of selectable emotional states; record a verified emotional sentiment based on user confirmation or correction of the inferred emotional sentiment (see paragraphs 0029-0032, and 0040-0042, Examiner’s note: teaches if the confidence level is below a threshold prompting the user for confirmation of their perceived sentiment with a question rating (See paragraph 0040). If above a threshold level of confidence no confirmation prompt is needed (see paragraph 0041). Further here shows examples of a user being asked follow up rating questions related to their first rating (see paragraphs 0029-0032 and 0041-0042)).
and store the verified emotional sentiment in a verified data repository and additionally an unverified data repository storing the one or more responses; (see paragraph 0075, Examiner’s note: teaches storing information in memory for performing the methods and apparatus described).
associate written comments with the verified emotional sentiment; (see paragraph 0015, 0041, and Figures 2-3, Examiner’s note: shows using both written (text) comments and buttons in a survey).
Williams does not expressly teach aggregating survey data for filtering and display according to constraints or more specifically as recited in the claims (1) and aggregate the verified data for presentation in one or more reporting interfaces; and compare, on a per-user basis, the inferred emotional sentiment with the verified emotional sentiment to generate verification accuracy data quantifying a proportion of users whose verified emotional sentiment differed from the inferred emotional sentiment; an administrator interface configured to: display aggregated emotional sentiment data and comment summaries for one or more workplace groups; allow filtering of the aggregated emotional sentiment data based on one or more parameters including topic, department, location, or employee demographics; display visualizations showing sentiment distribution, group comparisons, and user-submitted commentary linked to aggregated emotional sentiment data; and display the verification accuracy data in a summary panel showing segmented proportions of users whose emotional state shifted positively, shifted negatively, or remained unchanged relative to the inferred emotional sentiment; and (2) storing information distinct from other information.
However, Nicolov which is in the art of sentiment analysis of surveys teaches (1) and aggregate the verified data for presentation in one or more reporting interfaces; (see paragraph 0082 and Figure 3, Examiner’s note: graphical or textual representation of the sentiment analysis).
and compare, on a per-user basis, the inferred emotional sentiment with the verified emotional sentiment to generate verification accuracy data quantifying a proportion of users whose verified emotional sentiment differed from the inferred emotional sentiment; (see paragraphs 0071 and 0074, Examiner’s note: teaches tracking the sentiment of an answer group over time, where an answer group may be a group that shares a characteristic (see paragraph 0074). Further teaches analyzing two different surveys over a period of time that ask the same question yet are worded differently (see paragraph 0071) to determine sentiment analysis).
an administrator interface configured to: display aggregated emotional sentiment data and comment summaries for one or more workplace groups; allow filtering of the aggregated emotional sentiment data based on one or more parameters including topic, department, location, or employee demographics; (see paragraph 0082 and Figure 3, Examiner’s note: graphical or textual representation of the sentiment analysis. Shows based on customer service, service department, and sales staff).
display visualizations showing sentiment distribution, group comparisons, and user-submitted commentary linked to aggregated emotional sentiment data; (see paragraphs 0071, 0073-0074, and 0082, Examiner’s note: teaches providing graphical or textual representations of the sentiment analysis (see paragraph 0082). Further teaches this sentiment analysis includes analysis based on gender, e.g. group comparisons (see paragraph 0073)).
and display the verification accuracy data in a summary panel showing segmented proportions of users whose emotional state shifted positively, shifted negatively, or remained unchanged relative to the inferred emotional sentiment (see paragraphs 0056, 0071, 0074, and 0082, Examiner’s note: teaches tracking user sentiment over time (see paragraphs 0071 and 0074) where the sentiment may be a score, either positive or negative, with a confidence (see paragraph 0056). Further teaches providing graphical and textual representations of the sentiment analysis (see paragraph 0082) and teaches this information may be discovered by answering open-ended questions).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Williams with the aforementioned teachings from Nicolov with the motivation of providing a common way to display results of a survey to interested parties (see Nicolov paragraphs 0071, 0074, and 0082), when using consumer surveys to gather data on attitudes, impressions, satisfaction level, etc. of customers is known (see Williams paragraph 0015).
Williams in view of Nicolov does not expressly teach (2) storing information distinct from other information.
However, Kim et al. which is in the art of surveys (see paragraphs 0070-0071) teaches (2) storing information distinct from other information (see paragraph 0071, Examiner’s note: teaches that dividing the data contained in the survey information database into multiple tiers of memory can allow efficient use of storage resources by placing items that are desired to be quickly accessible in one storage and less quickly in another storage).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Williams in view of Nicolov with the aforementioned teachings from Kim et al. with the motivation of providing a known way to allow the efficient use of resources by placing items desired to be quickly accessible in locations separate from other information that is not desired to be as easily accessible (see Kim et al. paragraph 0071), when storing information for use by the system to perform the functions/processes is known (see Williams paragraph 0075).
As per claim 2, Williams teaches
wherein the memory is a non-transitory computer-readable storage medium; and wherein the instructions stored on the non-transitory computer-readable storage medium further cause the server to present, to each user, a confirmation prompt asking whether the inferred emotional sentiment accurately reflects how the user feels. (see paragraphs 0029-0032, 0041-0042, Examiner’s note: here shows a user being asked follow-up rating questions related to their first rating to confirm sentiment when below a confidence level).
As per claim 3, Williams teaches
wherein the server is further configured to update the verified emotional sentiment based on a user-selected correction to the proposed emotional sentiment. (see paragraphs 0029-0032, 0041-0042, Examiner’s note: here shows a user being asked follow-up rating questions related to their first rating).
As per claim 4, Williams teaches
wherein each written comment in the comment summaries submitted by the user is tagged with a label corresponding to the verified emotional sentiment of the user (see paragraphs 0014, 0040-0042, Examiner’s note: users’ comments are processed to ask more specific questions about a user’s rating regarding a specific topic).
Williams does not expressly teach tracking when a comment was submitted or more specifically as recited in the claims at a time the comment was submitted.
However, Nicolov which is in the art of sentiment analysis of surveys (see abstract) teaches tracking when a comment was submitted or more specifically as recited in the claims at a time the comment was submitted (see paragraphs 0027 and 0073, Examiner’s note: tracking discussions over time).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Williams in view of Nicolov in view of Kim with the aforementioned teachings from Nicolov with the motivation of tracking a common feature of survey information like the time when a question was answered to be able to track opinions and changing opinions over time (see Nicolov paragraphs 0023, 0027, and 0071), when using consumer surveys to gather data on attitudes, impressions, satisfaction level, etc. of customers is known (see Williams paragraph 0015).
As per claim 5, Williams does not expressly teach wherein the administrator interface is further configured to display a distribution of emotional sentiment values across a plurality of workplace groups in a comparative format.
However, Nicolov which is in the art of sentiment analysis of surveys (see abstract) teaches wherein the administrator interface is further configured to display a distribution of emotional sentiment values across a plurality of workplace groups in a comparative format (see paragraphs 0071, 0074, and 0082, Examiner’s note: teaches graphical or textual representation of sentiment analysis (see paragraph 0082) where this is related to sales staff or a service department. Further teaches aggregating across groups with a common characteristic for determined sentiment analysis (see paragraphs 0071 and 0074)).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Williams in view of Nicolov in view of Kim with the aforementioned teachings from Nicolov with the motivation of providing a common way to display results of a survey to interested parties (see Nicolov paragraphs 0071, 0074, and 0082), when using consumer surveys to gather data on attitudes, impressions, satisfaction level, etc. of customers is known (see Williams paragraph 0015).
As per claim 6, Williams does not expressly teach wherein the administrator interface is configured to filter sentiment results by at least one of a survey topic, a department, a location, an employee role, and a demographic attribute.
However, Nicolov which is in the art of sentiment analysis of surveys (see abstract) teaches wherein the administrator interface is configured to filter sentiment results by at least one of a survey topic, a department, a location, an employee role, and a demographic attribute. (see paragraphs 0071, 0074, and 0082, Examiner’s note: teaches graphical or textual representation of sentiment analysis (see paragraph 0082) where this is related to sales staff or a service department. Further teaches aggregating across groups with a common characteristic for determined sentiment analysis (see paragraphs 0071 and 0074)).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Williams in view of Nicolov in view of Kim with the aforementioned teachings from Nicolov with the motivation of providing a common way to display results of a survey to interested parties according to interested constraints (see Nicolov paragraphs 0071, 0074, and 0082), when using consumer surveys to gather data on attitudes, impressions, satisfaction level, etc. of customers is known (see Williams paragraph 0015).
As per claim 7, Williams does not expressly teach wherein the administrator interface further displays employee comments in a format grouped by associated emotional sentiment.
However, Nicolov which is in the art of sentiment analysis of surveys (see abstract) teaches wherein the administrator interface further displays employee comments in a format grouped by associated emotional sentiment (see paragraphs 0081-0083, Examiner’s note: teaches displaying sentiment where these may be employees like sales staff or a service department based on sentiment).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Williams in view of Nicolov in view of Kim with the aforementioned teachings from Nicolov with the motivation of providing a common way to display results of a survey to interested parties according to interested constraints (see Nicolov paragraphs 0071, 0074, and 0082), when using consumer surveys to gather data on attitudes, impressions, satisfaction level, etc. of customers is known (see Williams paragraph 0015).
As per claim 8, Williams does not expressly teach wherein the server is configured to track and display changes in emotional sentiment over time for individual users or defined workplace groups.
However, Nicolov which is in the art of sentiment analysis of surveys (see abstract) teaches wherein the server is configured to track and display changes in emotional sentiment over time for individual users or defined workplace groups (see paragraphs 0071, 0074, 0082, and 0087, Examiner’s note: method performed by a server (see paragraph 0087), further teaches tracking sentiment over time (see paragraphs 0071 and 0074) and displaying information related to sentiment where these may be related to workplace groups (see paragraph 0082)).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Williams in view of Nicolov in view of Kim with the aforementioned teachings from Nicolov with the motivation of providing a common way to display results of a survey to interested parties according to interested constraints (see Nicolov paragraphs 0071, 0074, and 0082), when using consumer surveys to gather data on attitudes, impressions, satisfaction level, etc. of customers is known (see Williams paragraph 0015).
As per claim 9, Williams teaches A method for collecting, verifying, and reporting emotional sentiment data from survey participants, comprising: (see abstract and paragraph 0004, Examiner’s note: method for improved remote processing and interaction with an artificial survey administrator).
presenting, via one or more participant devices, (see paragraphs 0074-0075, Examiner’s note: system used in connection with computer terminals, where computer terminals can be things like mobile devices or cell phones).
a digital survey comprising a plurality of questions related to a topic of interest; (see Figures 2-3, Examiner’s note: show survey with multiple questions).
receiving, from a user, one or more responses to the digital survey, including selected answers and optionally written comments; (see Figures 2-3, Examiner’s note: show survey with multiple questions).
analyzing, via an analyzer module executing on a server and prior to completion of the digital survey, the one or more responses including structured answers and sentiment indicators to generate an inferred emotional sentiment for the user; (see paragraphs 0014-0015 and 0023, Examiner’s note: determining user sentiment and subsequent questions will be determined based on user response (see paragraph 0014), further teaches structured and unstructured data (see paragraphs 0015, 0023). Further teaches if the confidence level is below a threshold for the sentiment prompting the user for confirmation of their perceived sentiment with a question rating).
presenting the inferred emotional sentiment to the user and prompting the user to confirm or revise the inferred emotional sentiment by displaying a confirmation prompt comprising a confirmation option and a correction option, the correction option enabling the user to select a revised emotional sentiment from a set of selectable emotional states; receiving, from the user, a verified emotional sentiment; (see paragraphs 0015, 0029-0032, and 0040-0042, Examiner’s note: teaches if the confidence level is below a threshold prompting the user for confirmation of their perceived sentiment with a question rating (See paragraph 0040). If above a threshold level of confidence no confirmation prompt is needed (see paragraph 0041). Further here shows examples of a user being asked follow up rating questions related to their first rating (see paragraphs 0029-0032 and 0041-0042)).
associating the verified emotional sentiment with the one or more responses and any written comments; (see paragraph 0015, 0041, and Figures 2-3, Examiner’s note: shows using both written (text) comments and buttons in a survey).
storing the verified emotional sentiment in association with a survey record of the user in a verified data repository and additionally an unverified data repository storing the one or more responses; (see paragraph 0075, Examiner’s note: teaches storing information in memory for performing the methods and apparatus described).
Williams does not expressly teach aggregating survey data for filtering and display according to constraints or more specifically as recited in the claims (1) comparing, on a per-user basis, the inferred emotional sentiment with the verified emotional sentiment to generate verification accuracy data quantifying a proportion of users whose verified emotional sentiment differed from the inferred emotional sentiment; displaying, via an administrator interface, one or more visualizations of aggregated emotional sentiment data and associated commentary for individual users or workplace groups; and displaying, via the administrator interface, the verification accuracy data in a summary panel showing segmented proportions of users whose emotional state shifted positively, shifted negatively, or remained unchanged relative to the inferred emotional sentiment and (2) storing information distinct from other information.
However, Nicolov which is in the art of sentiment analysis of surveys teaches (1)
comparing, on a per-user basis, the inferred emotional sentiment with the verified emotional sentiment to generate verification accuracy data quantifying a proportion of users whose verified emotional sentiment differed from the inferred emotional sentiment (see paragraphs 0071 and 0074, Examiner’s note: teaches tracking the sentiment of an answer group over time, where an answer group may be a group that shares a characteristic (see paragraph 0074). Further teaches analyzing two different surveys over a period of time that ask the same question yet are worded differently (see paragraph 0071) to determine sentiment analysis).
displaying, via an administrator interface, one or more visualizations of aggregated emotional sentiment data and associated commentary for individual users or workplace groups; (see paragraph 0082 and Figure 3, Examiner’s note: graphical or textual representation of the sentiment analysis. Shows based on customer service, service department, sales staff).
and displaying, via the administrator interface, the verification accuracy data in a summary panel showing segmented proportions of users whose emotional state shifted positively, shifted negatively, or remained unchanged relative to the inferred emotional sentiment (see paragraphs 0056, 0071, 0074, and 0082, Examiner’s note: teaches tracking user sentiment over time (see paragraphs 0071 and 0074) where the sentiment may be a score, either positive or negative, with a confidence (see paragraph 0056). Further teaches providing graphical and textual representations of the sentiment analysis (see paragraph 0082) and teaches this information may be discovered by answering open-ended questions).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Williams with the aforementioned teachings from Nicolov with the motivation of providing a common way to display results of a survey to interested parties (see Nicolov paragraphs 0071, 0074, and 0082), when using consumer surveys to gather data on attitudes, impressions, satisfaction level, etc. of customers is known (see Williams paragraph 0015).
Williams in view of Nicolov does not expressly teach (2) storing information distinct from other information.
However, Kim et al. which is in the art of surveys (see paragraphs 0070-0071) teaches (2) storing information distinct from other information (see paragraph 0071, Examiner’s note: teaches that dividing the data contained in the survey information database into multiple tiers of memory can allow efficient use of storage resources by placing items that are desired to be quickly accessible in one storage and less quickly in another storage).
Before the effective filing date of the claimed invention it would have been obvious for one of ordinary skill in the art to have modified Williams in view of Nicolov with the aforementioned teachings from Kim et al. with the motivation of providing a known way to allow the efficient use of resources by placing items desired to be quickly accessible in locations separate from other information that is not desired to be as easily accessible (see Kim et al. paragraph 0071), when storing information for use by the system to perform the functions/processes is known (see Williams paragraph 0075).
As per claim 10, Williams teaches
further comprising presenting a confirmation prompt to the user asking whether the inferred emotional sentiment accurately reflects how the user feels. (see paragraphs 0029-0032, 0041-0042, Examiner’s note: here shows a user being asked follow-up rating questions related to their first rating to confirm sentiment when below a confidence level).
As per claim 11, Williams teaches
updating the verified emotional sentiment based on a user selection to revise the inferred emotional sentiment; (see paragraphs 0029-0032, 0041-0042, Examiner’s note: here shows a user being asked follow-up rating questions related to their first rating).
tagging each written comment submitted by the user with a label corresponding to the verified emotional sentiment; (see paragraphs 0014, 0040-0042, Examiner’s note: users’ comments are processed to ask more specific questions about a user’s rating regarding a specific topic).
Williams does not expressly teach filtering the aggregated emotional sentiment data based on at least one of a survey topic, a department, a location, an employee role, and a demographic attribute.
However, Nikolov, which is in the art of sentiment analysis of surveys (see abstract), teaches filtering the aggregated emotional sentiment data based on at least one of a survey topic, a department, a location, an employee role, and a demographic attribute (see paragraphs 0071, 0074, and 0082, Examiner’s note: teaches graphical or textual representation of the sentiment analysis (see paragraph 0082), where this is related to sales staff or a service department. Further teaches aggregating across groups with a common characteristic to determine such sentiment analysis (see paragraphs 0071 and 0074)).
Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Williams in view of Nikolov in view of Kim with the aforementioned teachings from Nikolov with the motivation of providing a common way to display the results of a survey to interested parties according to constraints of interest (see Nikolov paragraphs 0071, 0074, and 0082), when using consumer surveys to gather data on attitudes, impressions, satisfaction level, etc. of customers is known (see Williams paragraph 0015).
As per claim 12, Williams does not expressly teach wherein the one or more visualizations comprise a distribution of emotional sentiment values across a plurality of workplace groups.
However, Nikolov, which is in the art of sentiment analysis of surveys (see abstract), teaches wherein the one or more visualizations comprise a distribution of emotional sentiment values across a plurality of workplace groups (see paragraphs 0081-0083, Examiner’s note: teaches displaying sentiment, where the displayed groups may be employees such as sales staff or a service department).
Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Williams in view of Nikolov in view of Kim with the aforementioned teachings from Nikolov with the motivation of providing a common way to display the results of a survey to interested parties according to constraints of interest (see Nikolov paragraphs 0071, 0074, and 0082), when using consumer surveys to gather data on attitudes, impressions, satisfaction level, etc. of customers is known (see Williams paragraph 0015).
As per claim 13, Williams does not expressly teach further comprising: grouping and displaying user submitted comments according to the associated verified emotional sentiment; and tracking changes in the associated verified emotional sentiment over time for individual users or predefined organizational groups.
However, Nikolov, which is in the art of sentiment analysis of surveys (see abstract), teaches further comprising: grouping and displaying user submitted comments according to the associated verified emotional sentiment; and tracking changes in the associated verified emotional sentiment over time for individual users or predefined organizational groups (see paragraphs 0071, 0074, 0082, and 0087, Examiner’s note: method performed by a server (see paragraph 0087); further teaches tracking sentiment over time (see paragraphs 0071 and 0074) and displaying information related to sentiment, where these may be related to workplace groups (see paragraph 0082)).
Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Williams in view of Nikolov in view of Kim with the aforementioned teachings from Nikolov with the motivation of providing a common way to display the results of a survey to interested parties according to constraints of interest (see Nikolov paragraphs 0071, 0074, and 0082), when using consumer surveys to gather data on attitudes, impressions, satisfaction level, etc. of customers is known (see Williams paragraph 0015).
As per claim 14, Williams teaches A non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, cause a computing system to perform a method for collecting, verifying, and reporting emotional sentiment data, the method comprising: (see paragraphs 0080-0083, Examiner’s note: computer readable containing a program being run by a device to perform the functions).
presenting a digital survey to a user via a participant device, (see paragraphs 0074-0075, Examiner’s note: system used in connection with computer terminals, where computer terminals can be things like mobile devices or cell phones).
the digital survey comprising a plurality of questions related to a topic of interest; (see Figures 2-3, Examiner’s note: show survey with multiple questions).
receiving one or more survey responses from the user, including selected answers and optionally written comments; analyzing, via an analyzer module and prior to completion of the digital survey, the responses including structured answers and sentiment indicators to generate an inferred emotional sentiment; (see paragraphs 0014-0015 and 0023, Examiner’s note: determining user sentiment, where subsequent questions will be determined based on the user response (see paragraph 0014); further teaches structured and unstructured data (see paragraphs 0015, 0023). Further teaches, if the confidence level for the sentiment is below a threshold, prompting the user for confirmation of their perceived sentiment with a rating question).
prompting the user to confirm or revise the inferred emotional sentiment by displaying a confirmation prompt comprising a confirmation option and a correction option, the correction option enabling the user to select a revised emotional sentiment from a set of selectable emotional states; recording a verified emotional sentiment based on input from the user; (see paragraphs 0015, 0029-0032, and 0040-0042, Examiner’s note: teaches, if the confidence level is below a threshold, prompting the user for confirmation of their perceived sentiment with a rating question (see paragraph 0040); if above a threshold level of confidence, no confirmation prompt is needed (see paragraph 0041). Further here shows examples of a user being asked follow-up rating questions related to their first rating (see paragraphs 0029-0032 and 0041-0042)).
associating the verified emotional sentiment with the one or more survey responses and any written comments; (see paragraph 0015, 0041, and Figures 2-3, Examiner’s note: shows using both written (text) comments and buttons in a survey).
aggregating the verified emotional sentiment data from multiple users in a verified data repository and additionally an unverified data repository storing the one or more survey responses; (see paragraph 0075, Examiner’s note: teaches storing information in memory for performing the methods and apparatus described).
Williams does not expressly teach aggregating survey data for filtering and display according to constraints or more specifically as recited in the claims (1) comparing, on a per-user basis, the inferred emotional sentiment with the verified emotional sentiment to generate verification accuracy data quantifying a proportion of users whose verified emotional sentiment differed from the inferred emotional sentiment; generating one or more visualizations of the aggregated verified emotional sentiment data for presentation through an administrator interface; and generating a verification accuracy visualization displaying segmented proportions of users whose emotional state shifted positively, shifted negatively, or remained unchanged relative to the inferred emotional sentiment and (2) storing information distinct from other information.
However, Nikolov, which is in the art of sentiment analysis of surveys (see abstract), teaches (1)
comparing, on a per-user basis, the inferred emotional sentiment with the verified emotional sentiment to generate verification accuracy data quantifying a proportion of users whose verified emotional sentiment differed from the inferred emotional sentiment; (see paragraphs 0071 and 0074, Examiner’s note: teaches tracking the sentiment of an answer group over time, where the answer group may be a group that shares a characteristic (see paragraph 0074). Further teaches analyzing two different surveys over a period of time that ask the same question yet are worded differently (see paragraph 0071) to determine sentiment).
generating one or more visualizations of the aggregated verified emotional sentiment data for presentation through an administrator interface; (see paragraph 0082 and Figure 3, Examiner’s note: graphical or textual representation of the sentiment analysis. Shows based on customer service, service department, sales staff).
and generating a verification accuracy visualization displaying segmented proportions of users whose emotional state shifted positively, shifted negatively, or remained unchanged relative to the inferred emotional sentiment (see paragraphs 0056, 0071, 0074, and 0082, Examiner’s note: teaches tracking user sentiment over time (see paragraphs 0071 and 0074), where the sentiment may be a score, either positive or negative, with a confidence (see paragraph 0056). Further teaches providing graphical and textual representations of the sentiment analysis (see paragraph 0082) and teaches this information may be discovered by answering open-ended questions).
Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Williams with the aforementioned teachings from Nikolov with the motivation of providing a common way to display the results of a survey to interested parties (see Nikolov paragraphs 0071, 0074, and 0082), when using consumer surveys to gather data on attitudes, impressions, satisfaction level, etc. of customers is known (see Williams paragraph 0015).
Williams in view of Nikolov does not expressly teach (2) storing information distinct from other information.
However, Kim et al., which is in the art of surveys (see paragraphs 0070-0071), teaches (2) storing information distinct from other information (see paragraph 0071, Examiner’s note: teaches that dividing the data contained in the survey information database into multiple tiers of memory allows efficient use of storage resources by placing items that are desired to be quickly accessible in locations separate from other information).
Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Williams in view of Nikolov with the aforementioned teachings from Kim et al. with the motivation of providing a known way to allow the efficient use of storage resources by placing items desired to be quickly accessible in locations separate from other information that is not desired to be easily accessible (see Kim et al. paragraph 0071), when storing information for use by the system to perform its functions/processes is known (see Williams paragraph 0075).
As per claim 15, Williams teaches
wherein the instructions further cause the computing system to present a confirmation prompt asking the user to verify whether the inferred emotional sentiment accurately reflects how the user feels. (see paragraphs 0029-0032, 0041-0042, Examiner’s note: here shows a user being asked follow-up rating questions related to their first rating to confirm sentiment when confidence is below a threshold level).
As per claim 16, Williams teaches
wherein the instructions further cause the computing system to receive a revised emotional sentiment from the user (see paragraphs 0029-0032, 0041-0042, Examiner’s note: here shows a user being asked follow-up rating questions related to their first rating).
and store the revised emotional sentiment as a verified emotional sentiment. (see paragraph 0075, Examiner’s note: teaches storing information in memory for performing the methods and apparatus described).
As per claim 17, Williams teaches
wherein the instructions further cause the computing system to associate a label with each written comment submitted by the user, the label corresponding to the verified emotional sentiment. (see paragraphs 0014, 0040-0042, Examiner’s note: users’ comments are processed to ask more specific questions about a user’s rating regarding a specific topic).
As per claim 18, Williams does not expressly teach wherein the instructions further cause the computing system to filter the aggregated verified sentiment data based on at least one of: a topic, a department, an organizational unit, a location, and an employee demographic attribute.
However, Nikolov, which is in the art of sentiment analysis of surveys (see abstract), teaches wherein the instructions further cause the computing system to filter the aggregated verified sentiment data based on at least one of: a topic, a department, an organizational unit, a location, and an employee demographic attribute (see paragraphs 0071, 0074, and 0082, Examiner’s note: teaches graphical or textual representation of the sentiment analysis (see paragraph 0082), where this is related to sales staff or a service department. Further teaches aggregating across groups with a common characteristic to determine such sentiment analysis (see paragraphs 0071 and 0074)).
Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Williams in view of Nikolov in view of Kim with the aforementioned teachings from Nikolov with the motivation of providing a common way to display the results of a survey to interested parties according to constraints of interest (see Nikolov paragraphs 0071, 0074, and 0082), when using consumer surveys to gather data on attitudes, impressions, satisfaction level, etc. of customers is known (see Williams paragraph 0015).
As per claim 19, Williams does not expressly teach wherein the instructions further cause the computing system to generate a visualization comprising a distribution of emotional sentiment values across multiple workplace groups.
However, Nikolov, which is in the art of sentiment analysis of surveys (see abstract), teaches wherein the instructions further cause the computing system to generate a visualization comprising a distribution of emotional sentiment values across multiple workplace groups (see paragraphs 0081-0083, Examiner’s note: teaches displaying sentiment, where the displayed groups may be employees such as sales staff or a service department).
Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Williams in view of Nikolov in view of Kim with the aforementioned teachings from Nikolov with the motivation of providing a common way to display the results of a survey to interested parties according to constraints of interest (see Nikolov paragraphs 0071, 0074, and 0082), when using consumer surveys to gather data on attitudes, impressions, satisfaction level, etc. of customers is known (see Williams paragraph 0015).
As per claim 20, Williams does not expressly teach wherein the instructions further cause the computing system to: display user comments grouped by their associated verified emotional sentiment; and track and display changes in verified emotional sentiment over time for individual users or defined organizational groups.
However, Nikolov, which is in the art of sentiment analysis of surveys (see abstract), teaches wherein the instructions further cause the computing system to: display user comments grouped by their associated verified emotional sentiment; and track and display changes in verified emotional sentiment over time for individual users or defined organizational groups (see paragraphs 0071, 0074, 0082, and 0087, Examiner’s note: method performed by a server (see paragraph 0087); further teaches tracking sentiment over time (see paragraphs 0071 and 0074) and displaying information related to sentiment, where these may be related to workplace groups (see paragraph 0082)).
Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to have modified Williams in view of Nikolov in view of Kim with the aforementioned teachings from Nikolov with the motivation of providing a common way to display the results of a survey to interested parties according to constraints of interest (see Nikolov paragraphs 0071, 0074, and 0082), when using consumer surveys to gather data on attitudes, impressions, satisfaction level, etc. of customers is known (see Williams paragraph 0015).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Williams et al. (United States Patent Application Publication Number: US 2016/0203500) teaches that the system prompts the customer to confirm their perceived sentiment with a rating question. For example, if the textual analysis of the consumer comment resulted in a possible negative consumer sentiment with respect to a service, but the confidence interval was below that set by the user, a specific rating question would be generated and posted (see paragraph 0040)
Chun (United States Patent Application Publication Number: US 2016/0371374) teaches automatically analyzing the sentiment of a paragraph of text written as a review for a product or service and then displaying the sentiment as a feedback for the user to validate (see paragraph 0010)
Fisher et al. (United States Patent Application Publication Number: US 2019/0228357) teaches a survey that tracks sentiment and then providing the aggregated results of a team across employees on an interface (see Figures 2-5)
Kieser et al. (United States Patent Application Publication Number: US 2020/0004816) teaches analyzing text to determine sentiment across various comments and displaying the resulting aggregated sentiment in an interface (see Figures 5-11)
Timms et al. (United States Patent Application Publication Number: US 2023/0119405) teaches a survey that includes multiple choices and presenting the aggregated results across surveys in a display on the interface, where this survey relates to employees (see Figures 5-10)
Marks (United States Patent Application Publication Number: US 2023/0307134) teaches using AI to capture the emotional state of a user from an assessment and providing the results and interventions to a user on a screen, where this information relates to employees (see Figures 5-7)
Childress (United States Patent Application Publication Number: US 2023/0410022) teaches an employee (employment) survey with multiple questions and aggregating the employee information in dashboards across employees (see Figures 5-9)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIERSTEN SUMMERS whose telephone number is (571)272-6542. The examiner can normally be reached Monday - Friday 7am-3:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Uber, can be reached at 571-270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KIERSTEN V SUMMERS/Primary Examiner, Art Unit 3626