DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
The Amendment filed on December 11, 2025, has been entered. The examiner acknowledges the amendments to claims 1, 2, 7, 10, 14, 15, 16, and 18.
Rejections under 35 U.S.C. § 101: Applicant argues that the claims are not directed to a method of organizing human activity (e.g., mitigating risk, interactions between people, following rules or instructions). The Examiner counters that administering a survey requires the participation of another person in an organized question-and-answer exchange, an organization built upon interactions, rules, and, in all likelihood, the respondent following instructions. Further, the machine learning model adheres to those rules and instructions in ensuring continuity while modifying the original survey.
Applicant's additional arguments that the claims are not directed to a mental process ignore the underlying principles upon which the invention purports to operate. The invention applies automation to execute mental processes at speeds beyond human capability; automation does not eliminate the underlying mental processes employed.
Applicant argues the claims integrate the exception into a practical application. The Examiner notes that the specification indicates that the additional elements perform functions that are well-understood, routine, and conventional. From all indications, the user interface via which the electronic survey is administered is likely a display and keyboard, although voice recognition capabilities are possible. None of these meaningfully limits the exception or integrates it into a practical application; it is difficult to imagine a processor supporting an invention of this sort without user interface devices at multiple levels. What is not evident is any application of the additional elements beyond routine or conventional roles.
Further, machine learning is disclosed as providing real-time modification in support of survey execution, as the arguments state the survey is modified based on the correspondence between the first electronic survey question and the second electronic survey question. What is not apparent is how the machine learning model responds to that correspondence and how it produces the second electronic survey question, other than through what appears to be "de-duplication." Survey design routinely traces paths of questioning to lead to specific topical conclusions; this is a non-trivial process in which alternative paths are predicted and questions developed accordingly. The disclosure does not establish whether the machine learning model develops questions in real time or, more likely, follows pre-engineered paths that "correspond" to answers provided to previous questions. If the former is true, it should be disclosed. In its absence, the invention appears to follow predetermined paths that might involve lookup tables supplying paths, or elements of paths, to elicit the desired output from survey respondents. The practical application here is not evident, and training of the machine learning model in question design is absent from the technical disclosure.
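For illustration of the distinction drawn above, and not as a characterization of any disclosure of record, the following minimal Python sketch shows pre-engineered, lookup-table branching of the kind the Examiner describes; every name and table entry is hypothetical:

```python
# Illustration only (hypothetical, not from the application or the cited art):
# a survey that "modifies" itself by following pre-engineered paths from a
# lookup table keyed on prior answers -- the conventional design contrasted
# above with genuine real-time question generation.

# Hypothetical lookup table: (question_id, answer) -> next question_id
NEXT_QUESTION = {
    ("q1", "yes"): "q2a",
    ("q1", "no"): "q2b",
    ("q2a", "often"): "q3",
    ("q2b", "never"): "q3",
}

def next_question(question_id: str, answer: str) -> str:
    # Follows a predetermined path; no model generates a new question.
    return NEXT_QUESTION.get((question_id, answer), "end")

if __name__ == "__main__":
    print(next_question("q1", "yes"))  # q2a -- the path was fixed at design time
```

A system that merely selects among such predetermined paths applies the abstract idea on a computer rather than generating questions in real time.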
The improvement to technology is not apparent, and in the absence of a practical application, the rejections of the claims under 35 U.S.C. § 101 will not be withdrawn.
Rejections under 35 U.S.C. § 102 and § 103: Applicant argues that, as to claims 1, 10, and 18, the addition of Smith fails to cure the alleged deficiencies of the rejections. The Examiner notes that claims 1, 10, and 18 were rejected under § 102, and the amended claims, after additional search, remain rejected under § 102.
Applicant continues to argue that claims 7, 9, 14, and 17 are patentable over Long and Smith for at least the reason that they depend from claims 1, 10, and 18. The Examiner disagrees, and the rejections under 35 U.S.C. § 103 will not be withdrawn.
Claim Rejections – 35 U.S.C. § 101
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. Claims 1-20 are directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea) without providing significantly more.
Step 1
Step 1 of the subject matter eligibility analysis per MPEP § 2106.03 requires that the claims be directed to a process, machine, manufacture, or composition of matter. Claims 1-20 are directed to a process (method), machine (system), and product/article of manufacture, which are statutory categories of invention.
Step 2A
Claims 1-20 are directed to abstract ideas, as explained below.
Prong one of the Step 2A analysis requires identifying the specific limitation(s) in the claim under examination that the examiner believes recites an abstract idea, and determining whether the identified limitation(s) falls within at least one of the groupings of abstract ideas of mathematical concepts, mental processes, and certain methods of organizing human activity.
Step 2A-Prong 1
The claims recite the following limitations that are directed to abstract ideas, which can be summarized as the abstract idea of deduplicating questions in electronic surveys based on natural language processing and question classification to produce modified surveys.
Claim 1 recites: A method comprising:
administering a survey; (organizing human activity, following rules or instructions; observation, evaluation, judgment, opinion),
receiving one or more user inputs corresponding to the survey; (following rules or instructions; observation, evaluation, judgment, opinion),
determining a plurality of domain classifications for a plurality of questions in the survey; (organizing human activity, following rules or instructions; observation, evaluation, judgment, opinion),
determining, based on semantic content for the plurality of survey questions, a correspondence between a first survey question of the survey mapped to a domain classification and a second survey question of one or more prior surveys mapped to the domain classification; (organizing human activity, following rules or instructions; observation, evaluation, judgment, opinion),
responsive to the one or more user inputs, modifying the survey based on the correspondence between the first survey question and the second survey question; (organizing human activity, following rules or instructions; observation, evaluation, judgment, opinion), and
administering the modified survey (organizing human activity, following rules or instructions; observation, evaluation, judgment, opinion).
Additional limitations employ the method:
- generating a first classification indicating the domain classification for the first and second questions according to a first topic, and generating a second classification indicating an additional classification for a third question according to a second topic (claim 2);
- determining the correspondence between the first two questions based on a determination of similar semantic content drawn from phrases in the two questions, and on their being mapped to the domain classification (claim 3);
- determining, in response to the domain classification mapping, that the first and second questions are semantically similar (claim 4);
- determining the plurality of domain classifications based on determining that the first and second questions are mapped to the same domain based on similar semantic content (claim 5);
- modifying the survey by selecting the first question to include in the modified survey based on the questions being mapped to the same domain and being semantically similar (claim 6);
- modifying the survey by removing the second question based on the questions being mapped to the same domain and having similar content (claim 7);
- determining the plurality of classifications including a determination of the first question mapping to the first survey and the second question mapping to the second survey (claim 8); and
- providing the modified survey by providing the first question as part of the modified survey and excluding the second question from it (claim 9).
Each of these claimed limitations falls within the groupings of certain methods of organizing human activity (mitigating risk, interactions between people, following rules or instructions) and mental processes (observation, evaluation, judgment, and opinion).
Claims 10-20 recite similar abstract ideas as those identified with respect to claims 1-9.
Thus, claims 1-20 recite abstract ideas.
Step 2A-Prong 2
As per MPEP § 2106.04, while claims 1-20 recite additional limitations comprising hardware or software elements, such as a computer, a user interface, an electronic survey, a machine-learning model, a natural language processing model, non-transitory computer-readable media comprising one or more machine-learning models, and an administrator client device, these limitations are not sufficient to integrate the abstract ideas into a practical application, since these elements are invoked as tools to apply the instructions of the abstract ideas in a specific technological environment. The mere application of an abstract idea in a particular technological environment, and merely limiting the use of an abstract idea to a particular technological field, do not integrate an abstract idea into a practical application (MPEP § 2106.05(f) & (h)).
Evaluated individually, the additional elements do not integrate the identified abstract ideas into a practical application. Evaluating the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually.
The claims do not amount to a “practical application” of the abstract idea because they do not (1) recite any improvement to another technology or technical field; (2) recite any improvement to the functioning of the computer itself; (3) apply the judicial exception with, or by use of, a particular machine; (4) effect a transformation or reduction of a particular article to a different state or thing; or (5) provide other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment.
Accordingly, claims 1-20 are directed to abstract ideas.
Step 2B
Claims 1-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea.
The analysis above identifies the additional elements recited in the claims beyond the limitations directed to an abstract idea, and explains why the identified judicial exception(s) are not integrated into a practical application. Those findings are hereby incorporated into the analysis of the additional elements considered both individually and in combination.
For the reasons provided in the analysis under Step 2A, Prong 2, the additional elements, evaluated individually, do not amount to significantly more than a judicial exception.
Evaluating the claim limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. In addition to the factors discussed regarding Step 2A, prong two, there is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely amount to instructions to implement the identified abstract ideas on a computer.
Therefore, since no limitations in claims 1-20 transform the exception into a patent-eligible application such that the claims amount to significantly more than the exception itself, the claims are directed to non-statutory subject matter and are rejected under 35 U.S.C. § 101.
Claim Rejections – 35 U.S.C. § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102(a)(1) that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6, 8, 10-13, 15-16, and 18-20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Long, (US 20200074294 A1), hereafter Long, “Machine-Learning-Based Digital Survey Creation and Management.”
Regarding Claim 1, A computer-implemented method comprising:
administering, via a user interface, an electronic survey; Long teaches, (an environment 100 in which a digital survey system 118 operates, [ ], a survey administrator 102 is associated with the administrator device 104 and uses the administrator device 104 to manage the creation and distribution of a digital survey, [0039] and FIG. 1), receiving, via the user interface, one or more user inputs corresponding to the electronic survey, (Moreover, recipients 108a-108n are respectively associated with the recipient devices 110a-110n and use the recipient devices 110a-110n to provide responses to a digital survey, [0039] and FIG. 1),
determining, by at least one processor utilizing a machine-learning model, a plurality of domain classifications for a plurality of electronic survey questions in the electronic survey; (the survey-creation-machine learner 200 determines that an intent tag for the candidate-representative-survey question 304a matches an intent tag for the training survey question 302a. For instance, the survey-creation-machine learner 200 optionally comprises an RNN that compares an intent classification of the training survey question 302a to the intent classification of multiple candidate-representative-survey questions, [0063]),
determining, by the at least one processor and based on semantic content for the plurality of electronic survey questions, (in some implementations, the survey-creation-machine learner 200 extracts terms or words—or a combination of terms or words—from the training survey question 302a when identifying training textual features. For example, the survey-creation-machine learner 200 extracts terms and identifies an ordering of the extracted terms. To extract terms, in certain implementations, the digital survey system 118 uses an RNN or an RCNN as the survey-creation-machine learner 200. One such RNN and one such RCNN is described by Adrian Sanborn and Jacek Skryzalin, “Deep Learning for Semantic Similarity” (2015), [0059]), a correspondence (Long teaches matching intent tags, [0063]) between a first electronic survey question of the electronic survey mapped to a domain classification and a second electronic survey question of one or more prior electronic surveys mapped to the domain classification; (For instance, the digital survey system may determine a survey category for the representative survey question from a correlation database that correlates representative survey questions with survey categories, [0054], and in some embodiments, each training survey question corresponds to a ground-truth-representative-survey question. Accordingly, in some cases, a training survey question correlates to annotated or tagged training data—that is, a corresponding ground-truth-representative-survey question. As shown in FIG. 3A, for instance, training survey questions 302a through 302n respectively correspond to ground-truth-representative-survey questions 308a through 308n, [0057]. Alternatively, the administrator device 104 may detect user input from the survey administrator 102 selecting the initial survey question 202 and the answer choices 204 (e.g., from a previous digital survey), [0052]),
responsive to the one or more user inputs, (the administrator device 104 sends (and the digital survey system 118 receives) user input to create an initial survey question 202, [0052]), modifying the electronic survey based on the correspondence between the first electronic survey question and the second electronic survey question; (using an iterative process of inputs and outputs, the digital survey system 118 determines candidate-representative-survey questions and compares those questions to ground-truth-representative-survey questions. The digital survey system 118 uses this comparison as a basis for adjusting machine-learning parameters of the survey-creation-machine learner 200, [0056], and the disclosed systems use a survey-creation-machine learner to generate suggested survey questions for an administrator designing a digital survey, [0005]), and
administering, via the user interface, the modified electronic survey, (A survey administrator is associated with the administrator device and uses the administrator device to manage the creation and distribution of a digital survey, [0039]).
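For illustration only, a minimal sketch of the recited domain-classification step, using a hypothetical keyword heuristic in place of the RNN-based category determination Long describes; all names, keywords, and questions are assumptions invented for this sketch:

```python
# Illustration only: a hypothetical keyword heuristic standing in for the
# RNN-based category determination Long describes. Domain names, keywords,
# and questions are invented for this sketch.
DOMAIN_KEYWORDS = {
    "satisfaction": ["satisfied", "happy", "recommend"],
    "demographics": ["age", "gender", "income"],
}

def classify_domain(question: str) -> str:
    """Return the first domain whose keywords appear in the question."""
    text = question.lower()
    for domain, keywords in DOMAIN_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return domain
    return "other"

survey = ["How satisfied are you with the product?", "What is your age range?"]
print({q: classify_domain(q) for q in survey})
# maps the first question to "satisfaction" and the second to "demographics"
```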
Claims 10 and 18 are rejected for reasons corresponding to those provided for Claim 1. In these claims, the addition of one or more non-transitory computer readable media, and at least one processor, does not change the rationale for the rejections under 35 U.S.C. § 102 or the referenced prior art (Long teaches the system comprises non-transitory computer readable media, [0005], and at least one electronic processor, [0040]).
Regarding Claim 2, The computer-implemented method of claim 1, wherein determining the plurality of domain classifications comprises:
generating, utilizing a machine-learning model, Long teaches, (machine-learning techniques to facilitate the creation, timing of distribution, or follow-up actions for digital surveys, [0005]), a first classification indicating the domain classification for the first electronic survey question and the second electronic survey question according to a first topic; (For instance, the digital survey system 118 may determine a survey category for the representative survey question from a correlation database that correlates representative survey questions with survey categories, [0054]), and
generating, utilizing the machine-learning model, a second classification indicating an additional domain classification for a third electronic survey question of the one or more electronic surveys according to a second topic, (the suggested survey questions 206a and 206b correspond to a survey category matching the survey category that the digital survey system 118 determines for the initial survey question 202. Alternatively, in some embodiments, suggested survey questions correspond to different survey categories or to no identifiable survey categories, [0054]).
Claims 11 and 19 are rejected for reasons corresponding to those provided for Claim 2. In these claims, the addition of one or more non-transitory computer readable media, and at least one processor, does not change the rationale for the rejections under 35 U.S.C. § 102 or the referenced prior art (Long teaches the system comprises non-transitory computer readable media, [0005], and at least one electronic processor, [0040]).
Regarding claim 3, The computer-implemented method of claim 2, wherein determining the correspondence between the first electronic survey question and the second electronic survey question Long teaches, (the digital survey system provides selectable options for suggested survey questions that correspond to a survey category, [0027]), comprises:
determining, utilizing a natural language processing model, that the first electronic survey question and the second electronic survey question comprise semantically similar content based on words or phrases included in the first electronic survey question and the second electronic survey question; (in some instances, the digital survey system uses a recursive neural network trained to identify textual similarity between survey questions or to determine intent of survey questions. As another example, in some embodiments, the digital survey system uses a recurrent neural network (“RNN”) or a Naive Bayes Support Vector Machine (“NBSVM”) to categorize or determine the intent of survey questions, [0028]), and
determining the correspondence in response to the first electronic survey question and the second electronic survey question comprising the semantically similar content and being mapped to the domain classification, (the survey-creation-machine learner 200 selects the candidate-representative-survey question 304a (from among multiple candidate-representative-survey questions) as having the same intent classification as the training survey question 302a, [0063], [ ] and
the survey-creation-machine learner 200 determines that a reciprocal intent of the candidate-representative-survey question 304a corresponds to the intent of the training survey question 302a, [0063]. In some cases, the survey-creation-machine learner 200 comprises an RNN that determines a probability score that a given training survey question belongs in a same category as a candidate-representative-survey question [0062]).
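For illustration only, a minimal sketch of a semantic-similarity determination, using bag-of-words cosine similarity as a simplified stand-in for the RNN/NBSVM approaches Long describes; this simplification is an assumption for exposition, not Long's disclosed model:

```python
# Illustration only: toy bag-of-words cosine similarity between two survey
# questions; not the RNN/NBSVM models Long discloses.
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity over word-count vectors of the two questions."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(v * v for v in va.values()))
    norm_b = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

q1 = "How satisfied are you with our service?"
q2 = "How happy are you with our service?"
print(cosine_similarity(q1, q2))  # ~0.86: high overlap suggests similar content
```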
Claims 12 and 20 are rejected for reasons corresponding to those provided for Claim 3. In these claims, the addition of one or more non-transitory computer readable media, and at least one processor, does not change the rationale for the rejections under 35 U.S.C. § 102 or the referenced prior art (Long teaches the system comprises non-transitory computer readable media, [0005], and at least one electronic processor, [0040]).
Regarding claim 4, The computer-implemented method of claim 3, wherein determining the correspondence comprises determining, utilizing the natural language processing model, that the first electronic survey question and the second electronic survey question comprise the semantically similar content in response to determining that the first electronic survey question and the second electronic survey question are mapped to the domain classification. Long teaches, (the digital survey system 118 may use any of the machine-learning models mentioned above as the survey-creation-machine learner 200, [0058], the survey-creation-machine learner 200 determines a semantic meaning of the training survey question 302a, [0060], and the survey-creation-machine learner 200 comprises an RNN that determines a probability score that a given training survey question belongs in a same category as a candidate-representative-survey question, [0062]).
Claim 13 is rejected for reasons corresponding to those provided for Claim 4. In this claim, the addition of one or more non-transitory computer readable media, and at least one processor, does not change the rationale for the rejections under 35 U.S.C. § 102 or the referenced prior art (Long teaches the system comprises non-transitory computer readable media, [0005], and at least one electronic processor, [0040]).
Regarding claim 5, The computer-implemented method of claim 3, wherein determining the plurality of domain classifications comprises determining, utilizing the machine-learning model, Long teaches, (the digital survey system 118 may use any of the machine-learning models mentioned above as the survey-creation-machine learner 200, [0058]), that the first electronic survey question and the second electronic survey question are mapped to the domain classification, (the survey-creation-machine learner 200 comprises an RNN that determines a probability score that a given training survey question belongs in a same category as a candidate-representative-survey question, [0062]), in response to determining that the first electronic survey question and the second electronic survey question comprise the semantically similar content, (the survey-creation-machine learner 200 optionally comprises an RNN that compares an intent classification of the training survey question 302a to the intent classification of multiple candidate-representative-survey questions, [0063]).
Regarding claim 6, The computer-implemented method of claim 3, wherein modifying the electronic survey comprises selecting the first electronic survey question to include in the modified electronic survey in response to determining that the first electronic survey question and the second electronic survey question are mapped to the domain classification and comprise the semantically similar content. Long teaches, (a survey-creation-machine learner to identify textual features of the initial survey question and to select a representative survey question based on the identified features, [0006]. For instance, the survey-creation-machine learner 200 optionally comprises an RNN that compares an intent classification of the training survey question 302a to the intent classification of multiple candidate-representative-survey questions, [0063]. Based on the representative survey question, the systems use the survey-creation-machine learner to determine a suggested survey question, [0006]).
Regarding claim 8, The computer-implemented method of claim 2, wherein determining the plurality of domain classifications comprises:
determining that the first electronic survey question is mapped to the domain classification from a first electronic survey; and
determining that the second electronic survey question is mapped to the domain classification from a second electronic survey, Long teaches, (the digital survey system 118 determines suggested survey questions that correspond to one or more survey categories as recommendations. In some such embodiments, the digital survey system 118 identifies a survey category for each survey question from within the correlation database, [0076]).
Regarding claim 15, The system of claim 10, wherein the at least one processor is further configured to cause the system to:
receive response data from one or more client devices in response to the modified electronic survey; Long teaches, (the digital survey system 118 receives a response 820 from the recipient device, [0158]),
determine that the response data lacks a response to an electronic survey question of the modified electronic survey; (the digital survey system 118 uses a loss function 306 to compare candidate-representative-survey questions and ground-truth-representative-survey questions, [0065]),
determine a loss based on the response data lacking the response to the electronic survey question; (the digital survey system 118 may use a variety of loss functions as a means of comparison, including, but not limited to, mean squared error, mean squared logarithmic error, mean absolute error, cross entropy loss, negative logarithmic likelihood loss, or L2 loss. For instance, in some embodiments, the digital survey system 118 uses a cross-entropy-loss function as the loss function 306 when using an RNN to determine textual similarity (e.g., by using a probability score for sentence categories). As another example, the digital survey system 118 optionally uses a mean-squared-error function as the loss function 306 when using an RNN to determine intent of training survey questions and candidate-representative-survey questions, [0065]), and
modify parameters of the one or more machine-learning models to classify the electronic survey question based on the loss, (the digital survey system 118 adjusts machine-learning parameters of the survey-creation-machine learner 200 based on the loss determined from the loss function 306. For instance, the digital survey system 118 adjusts the machine-learning parameters based on an object to decrease a loss in a subsequent training iteration, Long, [0066]).
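For illustration only, a minimal sketch of a loss-driven parameter adjustment of the kind Long describes: a single cross-entropy gradient step on a hypothetical linear classifier rather than Long's RNN; the model, data, and learning rate are assumptions:

```python
# Illustration only: one cross-entropy loss computation and one gradient
# update on a hypothetical 2-class linear classifier (not Long's RNN).
import numpy as np

def cross_entropy(logits: np.ndarray, label: int) -> float:
    """Softmax cross-entropy loss for a single example."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return float(-np.log(probs[label]))

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))   # parameters: 2 classes x 4 features
x = rng.normal(size=4)        # feature vector for one survey question
y = 1                         # ground-truth class label

logits = W @ x
probs = np.exp(logits - logits.max()); probs /= probs.sum()
grad = np.outer(probs - np.eye(2)[y], x)  # d(loss)/dW for softmax cross-entropy
W -= 0.1 * grad                           # adjust parameters to decrease loss
print(cross_entropy(W @ x, y))            # loss after the update
```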
Regarding claim 16, The system of claim 10, wherein the at least one processor is further configured to cause the system to:
receive response data from a plurality of client devices in response to the modified electronic survey; Long teaches, (the digital survey system 118 receives a response 820 from the recipient device, [0158]),
determine that an electronic survey question mapped to a first domain classification by the one or more machine-learning models is associated with a second domain classification based on the response data; (For instance, the digital survey system 118 may determine a survey category for the representative survey question from a correlation database that correlates representative survey questions with survey categories, [ ], and in some embodiments, suggested survey questions correspond to different survey categories or to no identifiable survey categories, [0054]), and
modify parameters of the one or more machine-learning models to map the electronic survey question to the second domain classification, (the digital survey system 118 adjusts machine-learning parameters of the survey-creation-machine learner 200 based on the loss determined from the loss function 306. For instance, the digital survey system 118 adjusts the machine-learning parameters based on an object to decrease a loss in a subsequent training iteration, Long, [0066]).
Claim Rejections – 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 7, 9, 14, and 17 are rejected under 35 U.S.C. § 103 as being unpatentable over Long, (US 20200074294 A1), hereafter Long, “Machine-Learning-Based Digital Survey Creation and Management,” in view of Smith, (US 20180122256 A1), hereafter Smith, “Guiding Creation of an Electronic Survey.”
Regarding claim 7, The computer-implemented method of claim 3, wherein modifying the electronic survey comprises removing the second electronic survey question from the one or more electronic surveys in response to determining that the first electronic survey question and the second electronic survey question are mapped to the domain classification and comprise the semantically similar content, Long does not teach; Smith teaches, (the electronic survey system 118 provides an option to accept or reject a suggested electronic survey question, including a suggested portion of an electronic survey question, [0120], and in some embodiments, the electronic survey system 118 filters electronic survey questions to identify and eliminate duplicate or near-duplicate electronic survey questions within a list of identified suggested electronic survey questions, [0099]).
Long and Smith are both considered to be analogous to the claimed invention because they are both in the field of survey development and refinement. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the question development techniques of Long with the duplication-based question removal of Smith to avoid providing the same or similar suggested electronic survey questions (Smith, [0099]).
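For illustration only, a minimal sketch of duplicate filtering of the kind Smith describes at [0099]: keep a question only if it does not near-duplicate one already kept; the toy Jaccard similarity and the threshold are assumptions, not Smith's disclosed implementation:

```python
# Illustration only: near-duplicate filtering with a toy Jaccard similarity;
# the 0.6 threshold is an assumption chosen for this example.
def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two questions, in [0, 1]."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def filter_duplicates(questions, threshold=0.6):
    kept = []
    for q in questions:
        # Keep q only if it is not a near-duplicate of any kept question.
        if all(jaccard(q, k) < threshold for k in kept):
            kept.append(q)
    return kept

print(filter_duplicates([
    "How satisfied are you with our service?",
    "How satisfied are you with our service today?",  # filtered as near-duplicate
    "What is your age range?",
]))
```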
Regarding claim 9, The computer-implemented method of claim 1, wherein providing the modified electronic survey comprises providing the first electronic survey question with the modified electronic survey and excluding the second electronic survey question from the modified electronic survey, Long does not teach; Smith teaches, (the electronic survey system 118 applies NLP to electronic survey questions within the database of categorized electronic survey questions—or a subset of electronic survey questions within the database of categorized electronic survey questions—to identify duplicate or near-duplicate electronic survey questions, [0100]).
Long and Smith are both considered to be analogous to the claimed invention because they are both in the field of survey development and refinement. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the question development techniques of Long with the duplication-based question removal of Smith to avoid providing the same or similar suggested electronic survey questions (Smith, [0099]).
Regarding claim 14, The system of claim 10, wherein the at least one processor is further configured to cause the system to determine the correspondence between the first electronic survey question and the second electronic survey question by:
generating a deduplication confidence score for the first electronic survey question and the second electronic survey question based on the domain classification and semantic similarities between the first electronic survey question and the second electronic survey question; Long does not teach; Smith teaches, (the electronic survey system 118 filters electronic survey questions to identify and eliminate duplicate or near-duplicate electronic survey questions within a list of identified suggested electronic survey questions. In other embodiments, the electronic survey system 118 filters the available electronic survey questions to be used as suggested electronic survey questions within the database of categorized electronic survey questions (e.g., a pre-filter process that reduces the number of questions within the database of electronic survey questions). By filtering electronic survey questions to eliminate duplicate or near-duplicate electronic survey questions, the electronic survey system 118 avoids providing the same or similar suggested electronic survey questions, [0099]), and
determining the correspondence between the first electronic survey question and the second electronic survey question based on the deduplication confidence score. Long teaches, (the survey-creation-machine learner 200 comprises an RNN that determines a probability score that a given training survey question belongs in a same category as a candidate-representative-survey question, [0062], and the survey-creation-machine learner 200 optionally comprises an RNN that compares an intent classification of the training survey question 302a to the intent classification of multiple candidate-representative-survey questions, [0063]).
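For illustration only, a minimal sketch of a "deduplication confidence score" as claim 14 recites it, blending domain-classification agreement with semantic similarity; the weights and threshold are assumptions, as neither reference discloses this exact computation:

```python
# Illustration only: hypothetical scoring function; the 0.4/0.6 weights and
# the 0.7 threshold are assumptions, not taught by the application or the art.
def dedup_confidence(q1: str, q2: str, classify, similarity) -> float:
    """Blend domain agreement and semantic similarity into one score."""
    same_domain = 1.0 if classify(q1) == classify(q2) else 0.0
    # A confident duplicate call needs both domain agreement and high
    # semantic similarity.
    return 0.4 * same_domain + 0.6 * similarity(q1, q2)

# Usage with the hypothetical helpers sketched earlier:
# is_duplicate = dedup_confidence(q1, q2, classify_domain, jaccard) > 0.7
```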
Regarding claim 17, The system of claim 10, wherein the at least one processor is further configured to cause the system to:
provide, for display at an administrator client device, an indication of a deduplicated electronic survey question corresponding to the modified electronic survey; Long does not teach; Smith teaches, (the electronic survey system 118 filters electronic survey questions to identify and eliminate duplicate or near-duplicate electronic survey questions within a list of identified suggested electronic survey questions. By filtering electronic survey questions to eliminate duplicate or near-duplicate electronic survey questions, the electronic survey system 118 avoids providing the same or similar suggested electronic survey questions to a survey administrator, Smith, [0099]);
receive, from the administrator client device, feedback data indicating that the deduplicated electronic survey question was incorrectly deduplicated; Long teaches, (the digital survey system 118 uses a loss function 306 to compare candidate-representative-survey questions and ground-truth-representative-survey questions, [ ] and in some embodiments, the digital survey system 118 uses a cross-entropy-loss function as the loss function 306 when using an RNN to determine textual similarity (e.g., by using a probability score for sentence categories), [0065]. In general, the digital survey system 118 compares candidate-representative-survey questions and ground-truth-representative-survey questions as a basis for adjusting machine-learning parameters, [0064]), and
modify parameters of the one or more machine-learning models based on the feedback data, (the digital survey system 118 adjusts machine-learning parameters of the survey-creation-machine learner 200 based on the loss determined from the loss function 306. For instance, the digital survey system 118 adjusts the machine-learning parameters based on an object to decrease a loss in a subsequent training iteration, Long, [0066]).
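For illustration only, a minimal sketch of the administrator-feedback loop claim 17 recites: a question flagged as incorrectly deduplicated is restored and logged as a labeled example for a later parameter adjustment; all names here are hypothetical:

```python
# Illustration only: hypothetical feedback handling; every name is invented.
feedback_log = []  # (question, label) pairs usable as training data later

def handle_feedback(survey: list, question: str,
                    incorrectly_deduplicated: bool) -> None:
    """Restore a wrongly removed question and record a retraining label."""
    if incorrectly_deduplicated:
        survey.append(question)  # undo the deduplication for this survey
        feedback_log.append((question, "not_duplicate"))

# The accumulated feedback_log could then drive a loss-based parameter
# adjustment such as the gradient step sketched under claim 15.
```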
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure or directed to the state of the art, is listed on the enclosed PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL BOROWSKI whose telephone number is (703)756-1822. The examiner can normally be reached M-F 8-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O’Connor can be reached on (571) 272-6787. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/MB/
Patent Examiner, Art Unit 3624
/MEHMET YESILDAG/Primary Examiner, Art Unit 3624