DETAILED ACTION
Receipt of Applicant’s Amendment, filed October 7, 2025, is acknowledged.
Claims 1-5, 11, 14, 16, 18 and 19 were amended.
Claims 1-20 are pending in this office action.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Claim Interpretation
With regard to claims 1, 11, and 16, claim 1 recites “receiving a selection of the first query from the query pair via the user interface; and annotating, in response to the received selection, the ranked passage according to the first candidate stance of the first query, thereby indicating an association between the ranked passage and the first candidate stance instead of the second candidate stance.” Claims 11 and 16 recite substantially similar language and are subject to the same interpretation.
Utility patents are limited by their recited structure and functionality, not by recitations of intended use or non-functional descriptive material (MPEP 2111.05(III), NFDM). Stating a resulting meaning (e.g., the claim language “thereby indicating”) that occurs due to a functional limitation does not impose any additional limitation on the claimed device. The above claim limitation recites the function of --receiving a selection of the first query from the user interface; and annotating the ranked passage according to the first candidate stance-- (paraphrased for brevity). The effect that results from that functionality is merely a human meaning derived from the presence of the annotation. The claim language “thereby indicating an association between the ranked passage and the first candidate stance instead of the second candidate stance” has been identified as non-functional descriptive material, and therefore does not impose a functional limitation on the claimed device.
Any device which performs the functional limitation of annotating the ranked passage would result in the claimed structure from which a human may reasonably derive the recited meaning (MPEP 2144.07).
Claim Objections
Claims 1-10 and 16-20 are objected to because of the following informalities. Appropriate correction is required.
With regard to claims 1 and 16, the claim recites “processing the one or more passages using a convincingness ranking model to generate a corresponding convincingness score for each passage of the one or more passages, wherein the convincingness ranking model is trained on one or more passage pairs”.
What the convincingness ranking model is ‘used to’ do reads as an intended use of the system. To be clear, what the convincingness ranking model is “used to generate” does not recite any positive function of the device beyond the providing of the passages. It is suggested that the claims be amended to recite “generating, by a convincingness ranking model, a corresponding convincingness score for each passage…”
With regard to claims 1, 11 and 16, claim 1 recites “one or more passages” and “one or more passage pairs”. Within the body of the claims, each of these elements is referred to merely as “passage” or “passages”. This creates ambiguity regarding whether applicant is referring to one of the one or more passage pairs or one of the one or more passages. It is suggested that the claims be amended to use clearly distinct labeling, such as “a passage of the one or more passages” versus “a training passage of the one or more training passage pairs”.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
With regard to claims 1, 11, and 16, claim 1 recites “providing a ranked passage from the one or more passages using a stance annotation user interface, comprising: the ranked passage, and a query pair under the topic…” This claim limitation is ambiguous and lacks antecedent basis. It is unclear what comprises the ranked passage and the query pair. One of ordinary skill in the art may reasonably read the providing of the ranked passages as comprising the ranked passage and the query pair, or may reasonably read the user interface as comprising the ranked passage and the query pair. For examination purposes this claim limitation has been construed to mean --providing a ranked passage from the one or more passages using a stance annotation user interface, the user interface comprising: the ranked passage, and a query pair under the topic…--
With regard to claims 3, 5, 17 and 19, claim 3 recites “labeling a passage pair with a label upon receiving the one or more passages, thereby generating the one or more passage pairs that are each pre-labeled.” This claim limitation lacks antecedent basis. Claims 5, 17 and 19 recite substantially similar limitations and are subject to the same rationale and interpretation.
This claim ultimately depends from claim 1, which recites “pre-labeled” for the passage pair, “one or more passages”, “the passage pair”, and “a first passage”. It is unclear if applicant is referring to the previously recited elements or attempting to recite new claim elements. The fact that the passage pair that is received is “pre-labeled” raises the question of whether the currently recited ‘labeling’ refers to the activity which resulted in the applied pre-labels, or whether the claim recites a new labeling operation.
When read in light of the instant specification, one of ordinary skill in the art would recognize the pre-labeled passages as training data (SPEC, ¶32). The training operation, which generates the ‘pre-labeled’ passage pairs, appears to relate to Figure 3, wherein step 302 recites obtaining the training data (SPEC, ¶39) and step 310 recites applying the label (SPEC, ¶41), which one of ordinary skill in the art would recognize as the ‘pre-labeled’ passage pairs. This is recited as an entirely distinct flowchart from Figure 6a, which appears to relate to the subject matter of claim 1, wherein the received query (topic and stance) of step 602 relates to the received passages, and step 604 determines a convincingness score using the trained model that is generated in Figure 3. This means that the pre-labeled passage pairs are required to be pre-labeled prior to the one or more passages being received, contrary to what is recited in amended claim 3 under the interpretation that the claim is directed to generating the one or more pre-labeled passage pairs.
The similarity between “one or more passages” and “one or more passage pairs” makes it unclear when applicant is referring to the training data or the data being processed. It is suggested that the claims be amended to use the distinct label of ‘training’ to differentiate between the training data (e.g. the pre-labeled passage pairs) and the data being processed (e.g. the one or more passages).
In Figure 6, step 610 of the instant specification, the training data is ‘updated/re-trained’. Please note there is a distinction between “generating the one or more passage pairs that are each pre-labeled”, which implies the generation of the original training data, and updating said pre-labeled passage pairs, which is modifying the original training data. For examination purposes this claim limitation has been construed as referring to the updating process described in Figure 6, step 610.
Furthermore, the functionality claimed is the act of labeling. The result of that labeling is the updated set of pre-labeled passage pairs. The ‘thereby’ limitation does not impose a functional limitation on the claimed device, and instead merely recites the meaning that is derived from the act of labeling (e.g., the intended use of the labeling).
For examination purposes this claim limitation has been understood to mean –generating a new passage pair by labeling the one or more passages…; storing the new passage pair to a collection of passage pairs--.
With regard to claims 4, 5, 18 and 19, claim 4 recites “disabling the selectable query pair when a predetermined time period elapses before receiving the selection”. This claim limitation lacks antecedent basis. Claims 5, 18 and 19 recite substantially similar limitations and are subject to the same rationale and interpretation.
The use of the label “selection” suggests that applicant is attempting to refer to the selection recited in claim 1. Yet, logically, if the selection button is disabled, the selection cannot be made. As such, if the button is disabled before receiving the selection, that selection cannot be made within the same instance of the device. This claim limitation creates an antecedent basis issue. To be clear, the limitations in claim 4 prevent the limitations of claim 1 from occurring.
The distinction between the selection of claim 1 and 5 is unclear and ambiguous. Is the claim reciting a new selection, or referring to the previous selection?
For examination purposes this claim limitation has been construed to mean – … before receiving a new selection of either one of a first query or a second query --. Meaning that the button is disabled for a distinct selection operation, such as during a second iteration.
With regard to claims 5 and 19, claim 5 recites “providing a user interface for convincingness annotation, wherein the user interface for the convincingness annotation comprises: displaying the query, wherein the query provides the stance under the topic”. This claim limitation lacks antecedent basis. Claim 19 recites substantially similar limitations and is rejected based upon the same rationale.
Claim 1 has already recited “a user interface”. The use of the same label raises the question of whether applicant is referring to the same claim element or defining a new claim element. Furthermore, what the user interface is used for is a recitation of intended use of the claim element, which does not impact the functionality of the claimed device.
Claim 1 has already recited “a query pair”, but has not recited “a query”. It is unclear if applicant is referring to the previously recited “query pair” or attempting to define a new claim element.
For examination purposes this claim limitation has been construed to mean --providing a second user interface, wherein the second user interface comprises: displaying a query, wherein the query provides the stance under the topic;--.
With regard to claims 7 and 20, claim 7 recites “comparing the convincingness score and the label of the passage pair”. When read in light of the instant specification, one of ordinary skill in the art may reasonably read the label as the convincingness score, raising a question of antecedent basis between the claim elements (Paragraph [0035] “In some aspect, the labels may include a score that describes a relative level of convincingness of one passage over the other in a passage pair from the stance.”). It is suggested that the claims be amended to clarify the relationship between the label and the score.
For example, a first convincingness score and a second convincingness score. For examination purposes, the label and the ‘convincingness scores’ have been interpreted as referring to the same claim element, simply distinct instances (e.g., scores in the model being compared to newly determined scores). Claim 20 recites substantially similar language and is rejected based upon the same rationale.
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claim 14 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
With regard to claim 14, the claim recites “wherein the user interface further comprises a feedback user interface element indicating whether the passage is convincing based on the stance under the topic.”
This claim ultimately depends from claim 11, which recites “receiving a feedback through a user interface that specifies whether the determined passage is convincing based on the stance for the topic indicated by the query;”.
Stating what the feedback specifies is more limiting than merely stating what the feedback indicates. The distinction between “user interface” and “user interface element” does not provide any additional limitation to the claimed device, as one of ordinary skill in the art would recognize these phrases as synonymous. Neither the claims nor the specification provides any patentable distinction in scope between a “user interface” and a “user interface element”.
Applicant has not provided any argument asserting a distinction in scope for the identified limitations of claim 11 and claim 14 within the remarks submitted October 7, 2025.
Claim 14 does not recite any new steps or operations to be performed by the claimed method. It merely describes the feedback that has already been recited, and that description is broader than what claim 11 requires the feedback to be. Claim 14 thus appears to recite similar, but broader, limitations than parent claim 11 and therefore does not further limit the parent.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-10 are rejected under 35 U.S.C. 103 as being unpatentable over Habernal [Which argument is more convincing? Analyzing and predicting convincingness of Web arguments using bidirectional LSTM] in view of Vaish [Twitch Crowdsourcing: Crowd Contributions in Short Bursts of Time].
With regard to claim 1, Habernal teaches A computer-implemented method
comprising:
receiving one or more passages as the text (Habernal, Page 1591, Section 3.1 “First, we must be sure that the obtained texts are actual arguments. Second, the context of the argument should be known (the prompt and the stance).”; Figure 1, see the text which make up the actual arguments 1 and 2);
processing the one or more passages using a convincingness ranking model as the BLSTM (Habernal, Page 1590; Section 1 “We propose a novel task of predicting convincingness of arguments in an argument pair, as well as ranking arguments related to a certain topic. Since no data for such a task are available, we create a new annotated corpus. We employ SVM model with rich linguistic features as well as bidirectional Long Short-Term Memory (BLSTM) neural networks because of their excellent performance across various end-to-end NLP tasks (Goodfellow et al., 2016; Piech et al., 2015; Wen et al., 2016; Dyer et al., 2015; Rocktaschel et al., 2016).”), to generate a corresponding convincingness score as the assigned real value score, e.g., the weights (Page 1596 Section 4.2 “We address this problem as a regression task. We use the UKPConvArgRank data, in which a real value score is assigned to each argument so the arguments can be ranked by their convincingness (for each topic independently).”; Page 1593 Section 3.4.2 “Then the weight w of edge ei is computed as follows: [Equation 1]”) for each passage of the one or more passages (Page 1593 Section 3.4.2 “Argument pair weights”), wherein the convincingness ranking model is trained (Habernal, Page 1594 Section 3.4.3 “We also release the full dataset UKPConvArgAll. In this data, no global filtering using graph construction methods is applied, only the local pre-filtering using MACE. We believe this dataset can be used as a supporting training data for some tasks that do not rely on the property of total ordering.”) on one or more passage pairs as the UKPConvArgAll data set (Id; Please note this claim limitation has been interpreted as --training passage pairs-- as per the 112b issue above), wherein each passage pair is pre-labeled as annotated pairs of arguments, e.g.,
the real value scores assigned to each argument so the arguments can be ranked by their convincingness (Habernal, Page 1590 Section 2, “Our newly created corpus of annotated pairs of arguments might resemble recent large-scale corpora for textual inference”; Page 1596 Section 4.2 “We address this problem as a regression task. We use the UKPConvArgRank data, in which a real value score is assigned to each argument so the arguments can be ranked by their convincingness (for each topic independently).”; Please note this claim limitation has been interpreted in light of Paragraph [0035] as a score) with an indication of convincingness (Id; Please note this claim limitation has been identified as an intended use of the pre-labels, as well as non-functional descriptive material. The meaning that a human may derive from the pre-label does not invoke a functionality relationship on the claimed device. Within the device this label appears to have been human provided intelligence (See Paragraph [0035])) relative to respective passage (Habernal, Page 1592 Section 3.4.2 “Considering Al is more convincing than A2 as a binary relation R, we thus asked the following research question: Is convincingness a measure with total strict order or strict weak order? Namely, is relation R that compares convincingness of two arguments transitive, antisymmetric, and total?”; Page 1594 Section 3.4.3 “We also release the full dataset UKPConvArgAll. In this data, no global filtering using graph construction methods is applied, only the local pre-filtering using MACE. 
We believe this dataset can be used as a supporting training data for some tasks that do not rely on the property of total ordering.”; Page 1595 Section 4.1 “pre-trained word embeddings”) of the passage pair as A1 vs A2 (Id; Please note this claim limitation has been interpreted as --the training passage pairs-- as per the 112b above) prior to (Please note this claim limitation has been interpreted to mean the labels are applied to the training passage pairs prior to the training of the model) being used to train (Habernal; Page 1594 Section 3.4.3 “We also release the full dataset UKPConvArgAll. In this data, no global filtering using graph construction methods is applied, only the local pre-filtering using MACE. We believe this dataset can be used as a supporting training data for some tasks that do not rely on the property of total ordering.”; Page 1595 Section 4.1 “pre-trained word embeddings”) the convincingness ranking model as the UKPConvArgAll is a labeled training set that has been pre-filtered and is believed to be useful as training data (Habernal, Page 1594 Section 3.4.3 “We also release the full dataset UKPConvArgAll. In this data, no global filtering using graph construction methods is applied, only the local pre-filtering using MACE. We believe this dataset can be used as a supporting training data for some tasks that do not rely on the property of total ordering.”);
ranking the one or more passages (Page 1596 Section 4.2 “We address this problem as a regression task. We use the UKPConvArgRank data, in which a real value score is assigned to each argument so the arguments can be ranked by their convincingness (for each topic independently).”) based on one or more convincingness scores as their assigned scores, e.g., the computed weights (Id; Page 1593 Section 3.4.2 “Then the weight w of edge ei is computed as follows: [Equation 1]”);
providing a ranked passage from the one or more passages (Id), using a stance annotation (Habernal, Page 1592 Figure 2; Please note this claim limitation has been interpreted as an intended use of the claimed interface. What the user interface is used for does not change the functionality of the claimed device. For examination purposes this claim limitation has been read in light of Paragraph [0073] of the original specification) user interface (Habernal, Page 1589 Figure 1; Page 1591 Section 3.2 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "Al is more convincing than A2" (Al>A2), "Al is less convincing than A2" (Al<A2), or "Al and A2 are convincing equally" (Al=A2). Moreover, they were obliged to write the reason 30-140 characters long. An example of fully annotated argument pair is shown in Figure 2.”; Page 1592 Figure 2) comprising:
the ranked passage as argument 1 (Habernal, Page 1589, Figure 1) and
a query pair (Habernal, Page 1591 Section 3.2 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1.”) under a topic (Habernal, Page 1591, Section 3.1 “First, we must be sure that the obtained texts are actual arguments. Second, the context of the argument should be known (the prompt and the stance).”), the query pair comprising a first query indicating a first candidate stance as the ‘yes’ topic for each prompt, e.g., “should physical edu. be mandatory? Yes” (Habernal, Page 1596 Table 3; Please note this claim limitation has been read in light of Figure 5 where the prompt is the same, and the stance is a positive or negative answer for the prompt) and a second query indicating a second candidate stance as the ‘no’ topic for each prompt, e.g., “should physical edu. be mandatory? No” (Habernal, Page 1596 Table 3; Please note this claim limitation has been read in light of Figure 5 where the prompt is the same, and the stance is a positive or negative answer for the prompt), wherein each query is interactively selectable as the user’s choosing a reasoning (Habernal, Page 1591 Section 3.2 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "Al is more convincing than A2" (Al>A2), "Al is less convincing than A2" (Al<A2), or "Al and A2 are convincing equally" (Al=A2). Moreover, they were obliged to write the reason 30-140 characters long. An example of fully annotated argument pair is shown in Figure 2.”; Page 1592 Figure 2);
receiving a selection of the first query from the query pair via the user interface as the user choosing A1>A2 (Id); and
annotating, in response to the received selection as the user choosing A1>A2 (Id), the ranked passage according to the first candidate stance of the first query (Habernal, Page 1591 Section 3.2 “All 16,927 argument pairs were annotated by five workers each (85k assignments in total). We also allowed workers to express their own standpoint toward the topics.”), thereby indicating an association between the ranked passage (Habernal, Page 1594 Section 3.4.3 “This allows us to rank all arguments for a particular topic.”) and the first candidate stance as the selected more convincing argument of the argument pairs (Habernal, Page 1591 Section 3 “Given two arguments, one should be selected as more convincing”) instead of the second candidate stance as the non-selected less convincing argument (Id).
Habernal does not explicitly teach providing a user interface for stance annotation… wherein the user interface for stance annotation comprises: … displaying… a query pair …enabling either one of the query pair being interactively selectable; and receiving the selection of the selectable one of the query pair.
Vaish teaches providing a ranked passage from the one or more passages (Vaish, Page 3645 “Photo Ranking captures users’ opinion between two photographs. In formative work with product designers, we found that they require stock photos for mockups, but stock photo sites have sparse ratings. Likewise, computer vision needs more data to identify high-quality images from the web. Photo Ranking (Figure 1) asks users to swipe to choose the better of two stock photos on a theme, or contribute their own through their cell phone camera.”), using a stance annotation user interface (Vaish, Page 3645 Introduction “To engage a wider set of crowdsourcing contributors, we introduce twitch crowdsourcing: interfaces that encourage contributions of a few seconds at a time. Taking advantage of the common habit of turning to mobile phones in spare moments [24], we replace the mobile phone unlock screen with a brief crowdsourcing task, allowing each user to make small, compounded volunteer contributions over time”), wherein the user interface comprises:
the ranked passage as the two objects users are asked to provide an opinion between (Vaish, Page 3645 “Photo Ranking captures users’ opinion between two photographs. In formative work with product designers, we found that they require stock photos for mockups, but stock photo sites have sparse ratings. Likewise, computer vision needs more data to identify high-quality images from the web. Photo Ranking (Figure 1) asks users to swipe to choose the better of two stock photos on a theme, or contribute their own through their cell phone camera.”) and
a query pair as the two photos (Id) under a topic as the theme (Id), the query pair …, wherein each query is interactively selectable as the user is asked to swipe to choose between the options (Id);
receiving a selection of the first query from the query pair via the user interface (Vaish, Page 3646 Introduction “After making a selection, Twitch users can see whether their peers agreed with their selection. In addition, they can see how their contribution is contributing to the larger whole, for example aggregate responses on a map (Figure 2) or in a fact database (Figure 5).”)…
It would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to have implemented the crowdsourcing taught by Habernal (Page 1591 Section 3.2) to include not only workers, but to be expanded to include volunteer contributors using the Twitch Crowdsourcing interface taught by Vaish, as it yields the predictable results of engaging a wider set of crowdsourcing contributors, which can operate in very short time periods at low cognitive load (Vaish, Page 3645 Introduction “In contrast, existing mobile crowdsourcing platforms (e.g., [12,16,22]) tend to assume long, focused runs of work. Our design challenge is thus to create crowdsourcing tasks that operate in very short time periods and at low cognitive load.”).
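For illustration of the mapped functionality only (this sketch is not part of the examined record), the scoring and ranking of passages by convincingness can be outlined as follows; `score_passage` is a hypothetical stand-in for a trained model such as Habernal's BLSTM or SVM regressor, and the toy heuristic used here is purely illustrative:

```python
# Illustrative sketch only: rank passages by a convincingness score,
# as in UKPConvArgRank where each argument receives a real-valued score.
# score_passage is a hypothetical placeholder for a trained model.

def score_passage(passage: str) -> float:
    # Toy proxy for a model-assigned score: word count of the argument.
    return float(len(passage.split()))

def rank_passages(passages: list[str]) -> list[tuple[str, float]]:
    scored = [(p, score_passage(p)) for p in passages]
    # Sort by descending convincingness score.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

ranked = rank_passages(["Short claim.",
                        "A longer, more detailed argument with evidence."])
```

In this sketch, the highest-scored passage corresponds to the "ranked passage" that would then be provided through the annotation interface.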
With regard to claim 2, the proposed combination further teaches wherein the one or more passages (Habernal, Page 1591; Section 3.1 “We will use the following terminology. We use topic to refer to a subset of an on-line debate with a given prompt and a certain stance (for example, "Should physical education be mandatory in schools? - yes" is considered as a single topic). Each debate has two topics, one for each stance. Argument is a single comment directly addressing the debate prompt. Argument pair is an ordered set of two arguments (Al and A2) belonging to the same topic; see Figure 1.”) comprise a query as the prompt (Id), wherein the query comprises the topic as the topic (Id), wherein the query specifies a third stance as the stance (Id; See the various stances for the questions under the “topic” category in Table 2, Page 1596), wherein the one or more passages (Habernal, Page 1591 Section 3.1; Page 1596 Table 2) each provide an argument as arguments 1 and 2 (Habernal, Figures 1 and 2, and Table 2) based on the third stance as the stance (Id) under the topic as the topic (Id), and wherein the convincingness ranking model (Habernal, Page 1590; Section 1 “We propose a novel task of predicting convincingness of arguments in an argument pair, as well as ranking arguments related to a certain topic. Since no data for such a task are available, we create a new annotated corpus. We employ SVM model with rich linguistic features as well as bidirectional Long Short-Term Memory (BLSTM) neural networks because of their excellent performance across various end-to-end NLP tasks (Goodfellow et al., 2016; Piech et al., 2015; Wen et al., 2016; Dyer et al., 2015; Rocktaschel et al., 2016).”) is based on a neural network as neural networks (Id).
With regard to claim 3, the proposed combination further teaches labeling as adding the annotations to the pairs of arguments, e.g., the real value scores assigned to each argument so the arguments can be ranked by their convincingness (Habernal, Page 1590 Section 2, “Our newly created corpus of annotated pairs of arguments might resemble recent large-scale corpora for textual inference”; Page 1596 Section 4.2 “We address this problem as a regression task. We use the UKPConvArgRank data, in which a real value score is assigned to each argument so the arguments can be ranked by their convincingness (for each topic independently).”) Please note the training data may be updated using the weights generated (Habernal, Page 1594 Section 3.4.3 “We believe this dataset can be used as a supporting training data for some tasks that do not rely on the property of total ordering.”; Page 1595 Section 4.1 “the embedding weights are further updated during training.”) a passage pair as the argument pairs (Habernal, Page 1591; Section 3.2 “An example of fully annotated argument pair is shown in Figure 2.”) with a label as the annotation, such as the ‘gold label’ (Habernal Page 1591 Section 3.2 “Gold reason is a reason whose label matches the gold label in the argument pair (see Figure 2).”; Page 1593 Section 3.4.2 “We thus compute a weight for each argument pair. Let ei be a particular annotation pair (edge). Let Gi be all labels in that pair that match the predicted gold label, and Oi opposite labels (different from the gold label).”) upon receiving the one or more passages (Habernal, Page 1591, Section 3.1 “First, we must be sure that the obtained texts are actual arguments.
Second, the context of the argument should be known (the prompt and the stance).”; Figure 1, see the text which make up the actual arguments 1 and 2; Please note for examination purposes this has been construed as –the one or more passages--), thereby generating the one or more passage pairs as the UKPConvArgAll data set (Habernal, Page 1594 Section 3.4.3 “We also release the full dataset UKPConvArgAll. In this data, no global filtering using graph construction methods is applied, only the local pre-filtering using MACE. We believe this dataset can be used as a supporting training data for some tasks that do not rely on the property of total ordering.”) that are each pre-labeled as annotated pairs of arguments, e.g. the real value scores assigned to each argument so the arguments can be ranked by their convincingness (Habernal, Page 1590 Section 2, “Our newly created corpus of annotated pairs of arguments might resemble recent large-scale corpora for textual inference”; Page 1596 Section 4.2 “We address this problem as a regression task. We use the UKPConvArgRank data, in which a real value score is assigned to each argument so the arguments can be ranked by their convincingness (for each topic independently)”).
wherein the passage pair as argument pair (Habernal, Page 1591; Section 3.1 “From each topic we automatically sampled 25-35 random arguments and created ( n * ( n - 1) / 2) argument pairs by combining all selected arguments.”; Please note this claim limitation has been construed to mean –the passage pair--) comprises a first passage as A1 (Habernal, Page 1591 Section 3.1 “The order of arguments Al and A2 in each argument pair was randomly shuffled.”; Please note this claim limitation has been construed to mean –the first passage--) and a second passage as A2 (Id) storing the passage pair to a collection of passage pairs (Habernal, Page 1590 Section 2 “Our newly created corpus of annotated pairs of arguments might resemble recent large-scale corpora for textual inference”; Page 1594 Section 3.4.3 “call this corpus UKPConvArgStrict.”);
generating a directed graph (Habernal, Page 1593 Algorithm 1: “Building DAG from sorted argument pairs.”), wherein the directed graph comprises a first node representing the first passage as node A for argument A (Habernal, Page 1592 Section 3.4.2 “In particular, does it exhibit properties such that if A≥B and B≥C, then A≥C (total ordering)? We can treat arguments as nodes in a graph and argument pairs as graph edges. We will denote such graph as argument graph (and use nodes/arguments and edges/pairs interchangeably in this section).7” Note 7: “Argument pair A > B becomes a directed edge A→B”), a second node representing the second passage as node B for argument B (Id), a third node representing a third passage as node C for argument C (Id), a first directed edge from the first node to the second node based on the labeling of the passage pair as the directed edge A→B (Id), a second directed edge from the second node to the third node as B≥C will become B→C (Id), and a third directed edge from the third node to the first node as an edge C→A that introduces a cycle into the graph (Habernal, Page 1593, Section 3.4.2 “We use Johnson's algorithm for finding all elementary cycles in DAG (Johnson, 1975).”; “By building argument graph from all pairs, introducing cycles into the graph seems to be inevitable”; In Algorithm 1 “if hasCycles(g) then”);
detecting a cyclic relationship when the graph forms a loop (Habernal, Page 1593, Section 3.4.2 “We use Johnson's algorithm for finding all elementary cycles in DAG (Johnson, 1975).”; In Algorithm 1 “if hasCycles(g) then”); and
removing the third node representing the third passage from the directed graph and the collection of passage pairs as a training set of passages (Habernal, Page 1593 Algorithm 1 “report about breaking DAG”; Page 1594 Section 3.4.3 “We discard the equal argument pairs in advance and filter out argument pairs that break the DAG properties.”).
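For illustration, the graph construction and cycle handling mapped above can be sketched as follows. This is a minimal sketch, not Habernal's Algorithm 1 verbatim: pairs are assumed to be encoded as (winner, loser) tuples, and a plain depth-first search stands in for Johnson's all-elementary-cycles algorithm; the function names are illustrative assumptions.

```python
def build_graph(pairs):
    """pairs: iterable of (winner, loser) tuples, e.g. ('A', 'B') for A > B."""
    graph = {}
    for winner, loser in pairs:
        graph.setdefault(winner, set()).add(loser)   # A > B becomes edge A -> B
        graph.setdefault(loser, set())
    return graph

def find_cycle(graph):
    """Return a list of nodes forming a cycle, or None if the graph is a DAG."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for succ in graph[node]:
            if color[succ] == GRAY:                  # back edge: cycle found
                return stack[stack.index(succ):]
            if color[succ] == WHITE:
                cycle = dfs(succ)
                if cycle:
                    return cycle
        color[node] = BLACK
        stack.pop()
        return None

    for node in graph:
        if color[node] == WHITE:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None

def remove_node(graph, pairs, node):
    """Drop a node from the graph and filter out the pairs that mention it."""
    graph.pop(node, None)
    for succs in graph.values():
        succs.discard(node)
    return [p for p in pairs if node not in p]

# A >= B and B >= C, but the pair C >= A introduces the cycle A -> B -> C -> A.
pairs = [('A', 'B'), ('B', 'C'), ('C', 'A')]
g = build_graph(pairs)
cycle = find_cycle(g)                 # the cycle through A, B, C is detected
pairs = remove_node(g, pairs, 'C')    # removing node C restores the DAG property
assert find_cycle(g) is None
```

This mirrors the claimed sequence: label pairs become directed edges, a cycle is detected when the graph forms a loop, and the offending node is removed from both the graph and the collection of pairs.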
With regard to claim 4, the proposed combination further teaches
disabling the selectable query pair as the choices (Vaish, Page 3647; “Twitch crowdsourcing interactions are very brief, so it is important that users can complete the tasks extremely quickly. Likewise, crowdsourcing is not the user’s primary task, so these tasks must be lightweight and not distracting. … Each task involves a choice between two to six options through a single motion such as a tap or swipe.”), e.g. by automatically skipping/exiting the task (Vaish, Page 3648 “If users do not wish to answer a question, they may skip Twitch by selecting ‘Exit’ via the options menu. This design decision has been made to encourage the user to give Twitch an answer, which is usually faster than exiting. Future designs could make it easier to skip a task, for example through a swipe-up.”), when a predetermined time period elapses before receiving a selection (Vaish, Page 3646 “The median Census task unlock took 1.6 seconds,”; Page 3647 “Quick completion: contribute within a couple of seconds”).
With regard to claim 5 the proposed combination further teaches providing a user interface as the HIT interface implemented within the twitch crowdsourcing interface (Habernal, Page 1591 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "Al is more convincing than A2" (Al> A2), "Al is less convincing than A2" (Al <A2), or "Al and A2 are convincing equally" (Al=A2).”; Vaish, Page 3645 Introduction “To engage a wider set of crowdsourcing contributors, we introduce twitch crowdsourcing: interfaces that encourage contributions of a few seconds at a time. Taking advantage of the common habit of turning to mobile phones in spare moments [24], we replace the mobile phone unlock screen with a brief crowdsourcing task, allowing each user to make small, compounded volunteer contributions over time”) for convincingness annotation (Habernal, Page 1591 “Moreover, they were obliged to write the reason 30-140 characters long. An example of fully annotated argument pair is shown in Figure 2.”; Please note this claim limitation has been construed as an intended use of the claimed user interface), wherein the user interface for the convincingness annotation comprises:
displaying the query as the prompt (Page 1589), wherein the query provides the stance as a stance (Page 1589 Figure 1; Page 1591 “We will use the following terminology. We use topic to refer to a subset of an on-line debate with a given prompt and a certain stance (for example, "Should physical education be mandatory in schools? - yes" is considered as a single topic).”) under the topic as a single topic (Id);
displaying the passage pair as Argument A1 and Argument A2 (Page 1589; Figure 1; Page 1591 “Argument pair is an ordered set of two arguments (Al and A2) belonging to the same topic; see Figure 1.”);
enabling either one of the passage pair being interactively selectable as the workers had to choose via swiping left or right (Habernal, Page 1591 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "Al is more convincing than A2" (Al> A2), "Al is less convincing than A2" (Al <A2), or "Al and A2 are convincing equally" (Al=A2).”; Vaish, Page 3645 “Photo Ranking captures users’ opinion between two photographs. In formative work with product designers, we found that they require stock photos for mockups, but stock photo sites have sparse ratings. Likewise, computer vision needs more data to identify high-quality images from the web. Photo Ranking (Figure 1) asks users to swipe to choose the better of two stock photos on a theme, or contribute their own through their cell phone camera.”);
receiving selection of either one of the passage pair as the user making a selection, e.g. choosing A1>A2 (Habernal, Page 1591 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "Al is more convincing than A2" (Al> A2), "Al is less convincing than A2" (Al <A2), or "Al and A2 are convincing equally" (Al=A2).”; Vaish, Page 3646 Introduction “After making a selection, Twitch users can see whether their peers agreed with their selection. In addition, they can see how their contribution is contributing to the larger whole, for example aggregate responses on a map (Figure 2) or in a fact database (Figure 5).”); and
performing the labeling as adding the annotations to the pairs of arguments, e.g. the real value scores assigned to each argument so the arguments can be ranked by their convincingness (Habernal, Page 1590 Section 2, “Our newly created corpus of annotated pairs of arguments might resemble recent large-scale corpora for textual inference”) based on the selection of either one of the passage pair (Habernal, Page 1591 Section 3.2 “All 16,927 argument pairs were annotated by five workers each (85k assignments in total). We also allowed workers to express their own standpoint toward the topics.”).
With regard to claim 6 the proposed combination further teaches collecting the one or more passages (Habernal, Page 1591, Section 3.1 “First, we must be sure that the obtained texts are actual arguments. Second, the context of the argument should be known (the prompt and the stance).”) comprises scraping data as a common crawl used to obtain tokens of word embeddings (Habernal, Page 1595 Section 4.1 “The input layer relies on pre-trained word embeddings, in particular GloVe (Pennington et al., 2014) trained on 840B tokens from Common Crawl10” Superscript 10: “10http://nlp.stanford.edu/projects/glove/”) from at least one of web content (Habernal, Page 1591 Section 3.1 “Sampling large sets of arguments for annotation from the Web poses several challenges. First, we must be sure that the obtained texts are actual arguments. Second, the context of the argument should be known (the prompt and the stance). Finally, we need sources with permissive licenses, which allow us to release the resulting corpus further to the community. These criteria are met by arguments from two debate portals.3”) at a website (Id; Subscript 3: “Namely, createdebate. com and procon. org.”) or an electronic document (Page 1591 “retrieved document are ranked according to their relevance and pairs of documents are automatically sampled”) from a document store based on relevancy as relevance (Id) of the data with the topic (Page 1591; Section 3.1 “We use topic to refer to a subset of an on-line debate with a given prompt”).
With regard to claim 7 the proposed combination further teaches providing the passage pair to the convincingness ranking model (Habernal, Page 1590; Section 1 “We propose a novel task of predicting convincingness of arguments in an argument pair, as well as ranking arguments related to a certain topic. Since no data for such a task are available, we create a new annotated corpus. We employ SVM model with rich linguistic features as well as bidirectional Long Short-Term Memory (BLSTM) neural networks because of their excellent performance across various end-to-end NLP tasks (Goodfellow et al., 2016; Piech et al., 2015; Wen et al., 2016; Dyer et al., 2015; Rocktaschel et al., 2016).”);
determining a convincingness score (Habernal, Page 1593 Section 3.4.2 “We thus compute a weight for each argument pair.”; Page 1596 Section 4.2 “We address this problem as a regression task. We use the UKPConvArgRank data, in which a real value score is assigned to each argument so the arguments can be ranked by their convincingness (for each topic independently). The task is thus to predict a real-value score for each argument from the test topic (remember that we use 32-fold cross validation).”; Please note this claim limitation has been construed as referring to the labels which have been reapplied to the passage pair) based on the passage pair as A1 and A2 (Habernal, Page 1592 Section 3.4.2);
comparing the convincingness score and the label of the passage pair (Habernal, Page 1592 Section 3.4.2 “Considering A1 is more convincing than A2 as a binary relation R, we thus asked the following research question: Is convincingness a measure with total strict order or strict weak order? Namely, is relation R that compares convincingness of two arguments transitive, antisymmetric, and total?”; Please note this claim limitation has been construed in view of Paragraph [0035] of the original specification which details comparing the convincingness of one passage over the other to obtain the label, e.g. score); and
updating the convincingness ranking model based on comparison between the convincingness score and the label of the passage pair (Page 1595 “the embedding weights are further updated during training.”).
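For illustration, the score/compare/update loop mapped above for claim 7 can be sketched as follows. This is a minimal sketch only: Habernal's actual models are an SVM and a BLSTM, whereas this uses a perceptron-style pairwise ranking update; the feature vectors, learning rate, and function names are illustrative assumptions.

```python
def score(weights, features):
    """Convincingness score of one passage as a weighted feature sum."""
    return sum(w * f for w, f in zip(weights, features))

def update(weights, pair, label, lr=0.1):
    """pair: (features_a1, features_a2); label: +1 if A1 > A2, -1 otherwise."""
    a1, a2 = pair
    margin = score(weights, a1) - score(weights, a2)
    predicted = 1 if margin > 0 else -1
    if predicted != label:                  # comparison with the label drives the update
        for i, (f1, f2) in enumerate(zip(a1, a2)):
            weights[i] += lr * label * (f1 - f2)
    return weights

weights = [0.0, 0.0]
# One labeled pair: the label says A1 is more convincing than A2.
weights = update(weights, ([1.0, 0.5], [0.2, 0.1]), +1)
assert score(weights, [1.0, 0.5]) > score(weights, [0.2, 0.1])
```

The sequence matches the claim mapping: the pair is provided to the model, a convincingness score is determined per passage, the predicted ordering is compared with the pair's label, and the model weights are updated based on that comparison.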
With regard to claim 8, Habernal teaches wherein the convincingness ranking model as the BLSTM (Habernal, Page 1590; Section 1 “We propose a novel task of predicting convincingness of arguments in an argument pair, as well as ranking arguments related to a certain topic. Since no data for such a task are available, we create a new annotated corpus. We employ SVM model with rich linguistic features as well as bidirectional Long Short-Term Memory (BLSTM) neural networks because of their excellent performance across various end-to-end NLP tasks (Goodfellow et al., 2016; Piech et al., 2015; Wen et al., 2016; Dyer et al., 2015; Rocktaschel et al., 2016).”) is based on a feed forward neural network, as a BLSTM is inherently a neural network that performs both feed-forward and back-propagation passes, and an SVM is inherently a feed-forward machine learning model (Id; Page 1595 “The core of the model consists of two bi-directional LSTM networks with 64 output neurons each.”), with back propagation, as a BLSTM inherently uses back propagation (Id), for updating (Page 1595 “the embedding weights are further updated during training.”) the convincingness ranking model with error correction (Page 1595 “We train the network with ADAM optimizer (Kingma and Ba, 2015) using binary cross-entropy loss function and regularize by early stopping (5 training epochs) and high drop-out rate (0.5) in the dropout layer.”; Page 1596 “we only replace the output layer with a linear activation function and optimize mean absolute error loss”).
With regard to claim 9, the proposed combination further teaches wherein the feed forward neural network as the BLSTM (Habernal, Page 1590; Section 1 “We propose a novel task of predicting convincingness of arguments in an argument pair, as well as ranking arguments related to a certain topic. Since no data for such a task are available, we create a new annotated corpus. We employ SVM model with rich linguistic features as well as bidirectional Long Short-Term Memory (BLSTM) neural networks because of their excellent performance across various end-to-end NLP tasks (Goodfellow et al., 2016; Piech et al., 2015; Wen et al., 2016; Dyer et al., 2015; Rocktaschel et al., 2016).”) comprises an initial layer for receiving the passage pair as the input layer receiving pre-trained word embeddings (Habernal, Page 1595 Section 4.1 “The input layer relies on pre-trained word embeddings, in particular GloVe (Pennington et al., 2014) trained on 840B tokens from Common Crawl”), a plurality of hidden layers as the core networks of the bidirectional model (Habernal, Page 1595 Section 4.1 “The core of the model consists of two bi-directional LSTM networks with 64 output neurons each.”) with a descending order of dimensions (Habernal, Page 1593 “Finally, the Desc algorithm sorts the pairs given their weight in descending order (the "better" estimates come first).”), a last layer for generating the convincingness score as the final sigmoid layer for binary predictions (Habernal, Page 1595 Section 4.1 “Their output is then concatenated into a single drop-out layer and passed to the final sigmoid layer for binary predictions.”), and a generator of new weights (Habernal, Page 1595 Section 4.1 “the embedding weights are further updated during training”) for the back propagation to the feed forward neural network as the bi-directional LSTM networks (Habernal, Page 1595 Section 4.1 “The core of the model consists of two bi-directional LSTM networks with 64 output neurons each”) when the 
convincingness score and the label of the passage pair are distinct as no equivalency pairs (Habernal Page 1595, See Table 1 right Column results; Page 1592 Section 3.4.2 “Considering A1 is more convincing than A2 as a binary relation R, we thus asked the following research question: Is convincingness a measure with total strict order or strict weak order? Namely, is relation R that compares convincingness of two arguments transitive, antisymmetric, and total?”; Please note this claim limitation has been construed in view of Paragraph [0035] of the original specification which details comparing the convincingness of one passage over the other to obtain the label, e.g. score).
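For illustration, the layered structure recited in claim 9 can be sketched as follows. This is a minimal sketch only: Habernal's actual model is a BLSTM, the layer sizes here are arbitrary assumptions, and the full back-propagation weight update is omitted for brevity; only the input layer, hidden layers with a descending order of dimensions, and the final sigmoid scoring layer are shown.

```python
import math
import random

random.seed(0)  # deterministic illustrative weights

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_layer(n_in, n_out):
    """A dense layer as a weight matrix of n_out rows, each with n_in weights."""
    return [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]

def forward(layers, x):
    """Feed-forward pass through all layers; each layer applies a sigmoid."""
    for layer in layers:
        x = [sigmoid(sum(w * v for w, v in zip(row, x))) for row in layer]
    return x[0]   # the final sigmoid layer has a single output: the score

# Hidden dimensions in descending order: 8 inputs -> 4 -> 2 -> 1 sigmoid output.
layers = [make_layer(8, 4), make_layer(4, 2), make_layer(2, 1)]
passage_features = [0.5] * 8          # placeholder for the passage pair features
score = forward(layers, passage_features)
assert 0.0 < score < 1.0              # sigmoid output is a bounded score
```

In a full implementation, the score would be compared with the pair's label, and new weights would be generated by back propagation whenever the two are distinct.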
With regard to claim 10, the proposed combination further teaches wherein the labeled passage pair as annotated pairs of arguments, e.g. the real value scores assigned to each argument so the arguments can be ranked by their convincingness (Habernal, Page 1590 Section 2, “Our newly created corpus of annotated pairs of arguments might resemble recent large-scale corpora for textual inference”; Page 1596 Section 4.2 “We address this problem as a regression task. We use the UKPConvArgRank data, in which a real value score is assigned to each argument so the arguments can be ranked by their convincingness (for each topic independently).”) comprises the topic as the topic (Habernal, Page 1591 Section 3.1 “We will use the following terminology. We use topic to refer to a subset of an on-line debate with a given prompt and a certain stance (for example, "Should physical education be mandatory in schools? - yes" is considered as a single topic). Each debate has two topics, one for each stance. Argument is a single comment directly addressing the debate prompt. Argument pair is an ordered set of two arguments (Al and A2) belonging to the same topic; see Figure 1.” Figure 1 and Figure 2, see the Stance “Yes”), the stance as the stance (Id), the passage pair as A1 and A2 (Id), and a convincingness annotation as the label as the annotations provided by the user (Id) of the passage pair as A1 and A2 (Id).
Claims 11-15 are rejected under 35 U.S.C. 103 as being unpatentable over Habernal in view of Skrenta (US 2013/0159251).
With regard to claim 11, Habernal teaches A computer-implemented method for providing a passage (please note that what the method is to be used for is an intended use of the claimed method) as passages A1 and A2 (Habernal, Page 1591 Section 3.1 “We will use the following terminology. We use topic to refer to a subset of an on-line debate with a given prompt and a certain stance (for example, "Should physical education be mandatory in schools? - yes" is considered as a single topic). Each debate has two topics, one for each stance. Argument is a single comment directly addressing the debate prompt. Argument pair is an ordered set of two arguments (A1 and A2) belonging to the same topic; see Figure 1.” Figure 1 and Figure 2, see the Stance “Yes”) with convincingness as the user provided annotations (Id) based on a stance as the stance (Id) within a topic as the topic (Id), the computer-implemented method comprising:
[[as the question (Habernal, Page 1591 Section 3.1 “We will use the following terminology. We use topic to refer to a subset of an on-line debate with a given prompt and a certain stance (for example, "Should physical education be mandatory in schools? - yes" is considered as a single topic). Each debate has two topics, one for each stance. Argument is a single comment directly addressing the debate prompt. Argument pair is an ordered set of two arguments (A1 and A2) belonging to the same topic; see Figure 1.” Figure 1 and Figure 2, see the Stance “Yes”) comprises the topic as the topic (Id) and the stance as the stance (Id);
determining a passage as argument, for example A1 or A2 (Habernal, Page 1591 Section 3.1 “We will use the following terminology. We use topic to refer to a subset of an on-line debate with a given prompt and a certain stance (for example, "Should physical education be mandatory in schools? - yes" is considered as a single topic). Each debate has two topics, one for each stance. Argument is a single comment directly addressing the debate prompt. Argument pair is an ordered set of two arguments (Al and A2) belonging to the same topic”), [[(Habernal, Page 1596; Section 4.2 “We address this problem as a regression task. We use the UKPConvArgRank data, in which a real value score is assigned to each argument so the arguments can be ranked by their convincingness (for each topic independently).”) based on the topic as a single topic (Id; Figure 1, see the Prompt-stance) and the stance as the stance (Habernal, Page 1591 Section 3.1 “We will use the following terminology. We use topic to refer to a subset of an on-line debate with a given prompt and a certain stance (for example, "Should physical education be mandatory in schools? - yes" is considered as a single topic). Each debate has two topics, one for each stance. Argument is a single comment directly addressing the debate prompt. Argument pair is an ordered set of two arguments (Al and A2) belonging to the same topic;”) among a collection of passages as the UKPConvArgRank Data (Habernal, Page 1596; Section 4.2 “We address this problem as a regression task. We use the UKPConvArgRank data, in which a real value score is assigned to each argument so the arguments can be ranked by their convincingness (for each topic independently).”) based on the stance as the stance (Habernal, Page 1591 Section 3.1 “We will use the following terminology. We use topic to refer to a subset of an on-line debate with a given prompt and a certain stance (for example, "Should physical education be mandatory in schools? 
- yes" is considered as a single topic). Each debate has two topics, one for each stance. Argument is a single comment directly addressing the debate prompt. Argument pair is an ordered set of two arguments (Al and A2) belonging to the same topic;”) under the topic as the topic (Id);
providing the determined passage to a trained convincingness ranking model as the BLSTM (Habernal, Page 1590; Section 1 “We propose a novel task of predicting convincingness of arguments in an argument pair, as well as ranking arguments related to a certain topic. Since no data for such a task are available, we create a new annotated corpus. We employ SVM model with rich linguistic features as well as bidirectional Long Short-Term Memory (BLSTM) neural networks because of their excellent performance across various end-to-end NLP tasks (Goodfellow et al., 2016; Piech et al., 2015; Wen et al., 2016; Dyer et al., 2015; Rocktaschel et al., 2016).”), wherein the trained convincingness ranking model is trained (Habernal, Page 1594 Section 3.4.3 “We also release the full dataset UKPConvArgAll. In this data, no global filtering using graph construction methods is applied, only the local pre-filtering using MACE. We believe this dataset can be used as a supporting training data for some tasks that do not rely on the property of total ordering.”) with training data as the supporting training data (Id) comprising one or more passage pairs as the UKPConvArgAll data set (Id; Please note this claim limitation has been interpreted as --training passage pairs-- as per the 112b issue above), wherein each passage pair is pre-labeled as annotated pairs of arguments, e.g. the real value scores assigned to each argument so the arguments can be ranked by their convincingness (Habernal, Page 1590 Section 2, “Our newly created corpus of annotated pairs of arguments might resemble recent large-scale corpora for textual inference”; Page 1596 Section 4.2 “We address this problem as a regression task. 
We use the UKPConvArgRank data, in which a real value score is assigned to each argument so the arguments can be ranked by their convincingness (for each topic independently).”; Please note this claim limitation has been interpreted in light of Paragraph [0035] as a score) for convincingness (Id; Please note this claim limitation has been identified as an intended use of the pre-labels, as well as non-functional descriptive material. The meaning that a human may derive from the pre-label does not invoke a functionality relationship on the claimed device. Within the device this label appears to have been human provided intelligence (See Paragraph [0035])) relative to respective passages (Habernal, Page 1592 Section 3.4.2 “Considering Al is more convincing than A2 as a binary relation R, we thus asked the following research question: Is convincingness a measure with total strict order or strict weak order? Namely, is relation R that compares convincingness of two arguments transitive, antisymmetric, and total?”; Please note this claim limitation has been interpreted as --a first training passage-- as per the 112b above) of the passage pair as A1 vs A2 (Id) prior to (Please note this claim limitation has been interpreted to mean the labels are applied to the training passage pairs prior to the training of the model) being used to train the convincingness ranking model as the UKPConvArgAll is a labeled training set that has been pre-filtered and is believed to be useful as training data (Habernal, Page 1594 Section 3.4.3 “We also release the full dataset UKPConvArgAll. In this data, no global filtering using graph construction methods is applied, only the local pre-filtering using MACE. We believe this dataset can be used as a supporting training data for some tasks that do not rely on the property of total ordering.”)
receiving a feedback as workers applying annotations (Habernal, Page 1591 Section 3.2 “All 16,927 argument pairs were annotated by five workers each (85k assignments in total).”) through a user interface as the HIT interface (Habernal, Page 1591 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "A1 is more convincing than A2" (A1 > A2), "A1 is less convincing than A2" (A1 < A2), or "A1 and A2 are convincing equally" (A1=A2).”), that (Please note this claim limitation has been construed as referring to the feedback) specifies whether the determined passage is convincing based on the stance for the topic as the workers chose a particular stance for the prompt as being the most convincing (Habernal, Page 1591 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "A1 is more convincing than A2" (A1 > A2), "A1 is less convincing than A2" (A1 < A2), or "A1 and A2 are convincing equally" (A1=A2).”); and
updating the trained convincingness ranking model (Habernal, Page 1595 “the embedding weights are further updated during training.”) based on the feedback as the weight is computed based on the votes of the workers (Habernal, Page 1593 “Let v be a single worker's vote and cv a global worker's competence score. Then the weight w of edge ei is computed as follows: [See Equation 1 based on cv]).”; See Equation 1) for the determined passage as the chosen argument passage (Habernal, Page 1591 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "A1 is more convincing than A2" (A1 > A2), "A1 is less convincing than A2" (A1 < A2), or "A1 and A2 are convincing equally" (A1=A2).”), which is the ‘vote’ chosen by the worker that is used to update the training weights (Habernal, Page 1593 “Let v be a single worker's vote and cv a global worker's competence score. Then the weight w of edge ei is computed as follows: [See Equation 1 based on cv]).”; See Equation 1).
Habernal does not explicitly teach receiving a query… determining a passage, wherein the passage relates to a highest convincingness score.
Skrenta teaches receiving a query (Skrenta, ¶417 “In some embodiments, the user enters key words”), wherein the query comprises the topic (Skrenta, ¶417 “the key words or search terms listed as topics”);
determining a passage as a search result (Skrenta, ¶419 “is an example of a screen display 2200 of a search result”), wherein the passage relates to a highest … score (Skrenta, ¶420 “embodiments, the list of items is displayed or ranked in order of relevance for that sub-group of items.”).
It would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to have implemented the proposed device to be able to search the annotated and ranked corpus generated by Habernal using a search engine such as that taught by Skrenta, as this is a normal activity within the field of art (Habernal, Page 1590 “emerging sub-field of NLP in which natural language arguments and argumentation are modeled, searched, analyzed, generated, and evaluated.”), which would yield the predictable result of allowing the relevant data to be obtained when desired.
With regard to claim 12 the proposed combination further teaches
providing a user interface as a search display screen (Skrenta, ¶417 “screen display of 2100 of a search query box”), the user interface comprising:
receiving the query as the user entering the terms (Skrenta, ¶417 “Once the user enters one or more search terms, the user selects ( e.g., by clicking on) a search button 2112 to initiate a database search in”) in a first input area as the query box (Skrenta, ¶417 “screen display of 2100 of a search query box”);
receiving a user input as the user clicking the button (Skrenta, ¶417 “Once the user enters one or more search terms, the user selects (e.g., by clicking on) a search button 2112 to initiate a database search in”) in a second input area as the search button (Id) to trigger searching as initiate a database search (Id) for the passage as the search results (Skrenta, ¶418 “FIG. 22A is an example of a screen display 2200 of a search result”), e.g. the argument (Habernal, Page 1591, Section 3.1 “First, we must be sure that the obtained texts are actual arguments. Second, the context of the argument should be known (the prompt and the stance).”); and
displaying the passage (Skrenta, ¶418 “FIG. 22A is an example of a screen display 2200 of a search result”).
With regard to claim 13, the proposed combination further teaches wherein the convincingness ranking model as the BLSTM (Habernal, Page 1590; Section 1 “We propose a novel task of predicting convincingness of arguments in an argument pair, as well as ranking arguments related to a certain topic. Since no data for such a task are available, we create a new annotated corpus. We employ SVM model with rich linguistic features as well as bidirectional Long Short-Term Memory (BLSTM) neural networks because of their excellent performance across various end-to-end NLP tasks (Goodfellow et al., 2016; Piech et al., 2015; Wen et al., 2016; Dyer et al., 2015; Rocktaschel et al., 2016).”) is based on a feed forward neural network, as a BLSTM is inherently a neural network that performs both feed-forward and back-propagation passes, and an SVM is inherently a feed-forward machine learning model (Id; Page 1595 “The core of the model consists of two bi-directional LSTM networks with 64 output neurons each.”), with back propagation, as a BLSTM inherently uses back propagation (Id), for retraining (Habernal, Page 1595 “the embedding weights are further updated during training.”).
With regard to claim 14 the proposed combination further teaches wherein the user interface as the HIT interface (Habernal, Page 1591 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "Al is more convincing than A2" (Al> A2), "Al is less convincing than A2" (Al <A2), or "Al and A2 are convincing equally" (Al=A2).”) further comprising:
a feedback user interface element as workers applying annotations (Habernal, Page 1591 Section 3.2 “All 16,927 argument pairs were annotated by five workers each (85k assignments in total).”) indicating whether the passage is convincing based on the stance under the topic as the workers had to choose (Habernal, Page 1591 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "A1 is more convincing than A2" (A1 > A2), "A1 is less convincing than A2" (A1 < A2), or "A1 and A2 are convincing equally" (A1=A2).”).
With regard to claim 15, the proposed combination further teaches wherein the feed forward neural network as the BLSTM (Habernal, Page 1590; Section 1 “We propose a novel task of predicting convincingness of arguments in an argument pair, as well as ranking arguments related to a certain topic. Since no data for such a task are available, we create a new annotated corpus. We employ SVM model with rich linguistic features as well as bidirectional Long Short-Term Memory (BLSTM) neural networks because of their excellent performance across various end-to-end NLP tasks (Goodfellow et al., 2016; Piech et al., 2015; Wen et al., 2016; Dyer et al., 2015; Rocktaschel et al., 2016).”) comprises an initial layer for receiving a labeled passage pair as the input layer receiving pre-trained word embeddings (Habernal, Page 1595 Section 4.1 “The input layer relies on pre-trained word embeddings, in particular GloVe (Pennington et al., 2014) trained on 840B tokens from Common Crawl”) with a label as the pre-trained embeddings (Id), a plurality of hidden layers as the core networks of the bidirectional model (Habernal, Page 1595 Section 4.1 “The core of the model consists of two bi-directional LSTM networks with 64 output neurons each.”) with a descending order of dimensions (Habernal, Page 1593 “Finally, the Desc algorithm sorts the pairs given their weight in descending order (the "better" estimates come first).”), a last layer for generating the convincingness score as the final sigmoid layer for binary predictions (Habernal, Page 1595 Section 4.1 “Their output is then concatenated into a single drop-out layer and passed to the final sigmoid layer for binary predictions.”), and a generator of new weights (Habernal, Page 1595 Section 4.1 “the embedding weights are further updated during training”) for back propagation to the feed forward neural network as the bi-directional LSTM networks (Habernal, Page 1595 Section 4.1 “The core of the model consists of two bi-directional LSTM networks 
with 64 output neurons each”) based on comparison between the convincingness score and the label of the labeled passage pair as no equivalency pairs (Habernal Page 1595, See Table 1 right Column results; Page 1592 Section 3.4.2 “Considering A1 is more convincing than A2 as a binary relation R, we thus asked the following research question: Is convincingness a measure with total strict order or strict weak order? Namely, is relation R that compares convincingness of two arguments transitive, antisymmetric, and total?”; Please note this claim limitation has been construed in view of Paragraph [0035] of the original specification which details comparing the convincingness of one passage over the other to obtain the label, e.g. score).
Claims 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Habernal in view of Vaish and Liu [2017/0278416].
With regard to claim 16 Habernal teaches A system, the system comprising:
[[
[[
identify a topic as a single topic (Habernal, Page 1591 Section 3.1 “We will use the following terminology. We use topic to refer to a subset of an on-line debate with a given prompt and a certain stance (for example, "Should physical education be mandatory in schools? - yes" is considered as a single topic). Each debate has two topics, one for each stance. Argument is a single comment directly addressing the debate prompt. Argument pair is an ordered set of two arguments (Al and A2) belonging to the same topic; see Figure 1.” Figure 1, see the Stance “Yes”) for a stance as the stance Yes (Id);
collect passages (Habernal, Page 1591, Section 3.1 “First, we must be sure that the obtained texts are actual arguments. Second, the context of the argument should be known (the prompt and the stance).”; Figure 1, see the text which make up the actual arguments 1 and 2) that as the passages A1 and A3 (Habernal, Page 1591 Section 3.1 “We will use the following terminology. We use topic to refer to a subset of an on-line debate with a given prompt and a certain stance (for example, "Should physical education be mandatory in schools? - yes" is considered as a single topic). Each debate has two topics, one for each stance. Argument is a single comment directly addressing the debate prompt. Argument pair is an ordered set of two arguments (Al and A2) belonging to the same topic; see Figure 1.” Figure 1, see the Stance “Yes”) are relevant to a query as the prompt (Id) representing the stance as the stance (Id) under the topic as the topic (Id), the passages originating from one or more passage stores (Habernal, Page 1591 Section 3.1 “Sampling large sets of arguments for annotation from the Web poses several challenges. First, we must be sure that the obtained texts are actual arguments. Second, the context of the argument should be known (the prompt and the stance). Finally, we need sources with permissive licenses, which allow us to release the resulting corpus further to the community. These criteria are met by arguments from two debate portals.3”; Subscript 3: “Namely, createdebate. com and procon. org.”);
identify a passage pair from the passages as passages A1 and A2 (Habernal, Page 1592 Section 3.4.2 “Considering Al is more convincing than A2 as a binary relation R, we thus asked the following research question: Is convincingness a measure with total strict order or strict weak order? Namely, is relation R that compares convincingness of two arguments transitive, antisymmetric, and total?”), wherein the passage pair comprise a first passage as A1 (Id) and a second passage as A2 (Id), both the first passage and the second passage being based at least on the stance (Habernal, Page 1591 Section 3.1 “We will use the following terminology. We use topic to refer to a subset of an on-line debate with a given prompt and a certain stance (for example, "Should physical education be mandatory in schools? - yes" is considered as a single topic). Each debate has two topics, one for each stance. Argument is a single comment directly addressing the debate prompt. Argument pair is an ordered set of two arguments (Al and A2) belonging to the same topic; see Figure 1.” Figure 1, see the Stance “Yes”) under the topic (Id);
label the passage pair as the annotations added by the workers (Habernal, Page 1591 “All 16,927 argument pairs were annotated by five workers each (85k assignments in total). We also allowed workers to express their own standpoint toward the topics”; Figure 2, see the bullets), wherein the labeling relates to generating a label as an annotation (Id) of the passage pair as A1 vs A2 (Id) according to a relative level of convincingness (Habernal, Page 1592 Section 3.4.2 “Considering Al is more convincing than A2 as a binary relation R, we thus asked the following research question: Is convincingness a measure with total strict order or strict weak order? Namely, is relation R that compares convincingness of two arguments transitive, antisymmetric, and total?”) between the first passage as A1 (Id) and the second passage as A2 (Id) based on the stance (Habernal, Page 1591 Section 3.1 “We will use the following terminology. We use topic to refer to a subset of an on-line debate with a given prompt and a certain stance (for example, "Should physical education be mandatory in schools? - yes" is considered as a single topic). Each debate has two topics, one for each stance. Argument is a single comment directly addressing the debate prompt. Argument pair is an ordered set of two arguments (Al and A2) belonging to the same topic; see Figure 1.” Figure 1, see the Stance “Yes”) under the topic (Id);
filter the first passage of the passage pair when the first passage creates a closed path sequence of convincingness ranking of the passages (Habernal, Page 1593 Algorithm 1 “report about breaking DAG”; Page 1594 Section 3.4.3 “We discard the equal argument pairs in advance and filter out argument pairs that break the DAG properties.”);
update a convincingness ranking model using the passage pair and the label of the passage pair (Page 1595 “the embedding weights are further updated during training.”), wherein the convincingness ranking model as the BLSTM (Habernal, Page 1590; Section 1 “We propose a novel task of predicting convincingness of arguments in an argument pair, as well as ranking arguments related to a certain topic. Since no data for such a task are available, we create a new annotated corpus. We employ SVM model with rich linguistic features as well as bidirectional Long Short-Term Memory (BLSTM) neural networks because of their excellent performance across various end-to-end NLP tasks (Goodfellow et al., 2016; Piech et al., 2015; Wen et al., 2016; Dyer et al., 2015; Rocktaschel et al., 2016).”) is trained (Habernal, Page 1594 Section 3.4.3 “We also release the full dataset UKPConvArgAll. In this data, no global filtering using graph construction methods is applied, only the local pre-filtering using MACE. We believe this dataset can be used as a supporting training data for some tasks that do not rely on the property of total ordering.”) on one or more passage pairs as the UKPConvArgAll data set (Id), that (Please note this claim limitation has been construed to refer to the passage pairs) were pre-labeled as annotated pairs of arguments, e.g. the real value scores assigned to each argument so the arguments can be ranked by their convincingness (Habernal, Page 1590 Section 2, “Our newly created corpus of annotated pairs of arguments might resemble recent large-scale corpora for textual inference”; Page 1596 Section 4.2 “We address this problem as a regression task. 
We use the UKPConvArgRank data, in which a real value score is assigned to each argument so the arguments can be ranked by their convincingness (for each topic independently).”; Please note this claim limitation has been interpreted in light of Paragraph [0035] as a score) as a result (Page 1595 “the embedding weights are further updated during training.”) of labeling as adding the annotations to the pairs of arguments, e.g. the real value scores assigned to each argument so the arguments can be ranked by their convincingness (Habernal, Page 1590 Section 2, “Our newly created corpus of annotated pairs of arguments might resemble recent large-scale corpora for textual inference”; Page 1596 Section 4.2 “We address this problem as a regression task. We use the UKPConvArgRank data, in which a real value score is assigned to each argument so the arguments can be ranked by their convincingness (for each topic independently).”) Please note the training data may be updated using the weights generated (Habernal, Page 1594 Section 3.4.3 “We believe this dataset can be used as a supporting training data for some tasks that do not rely on the property of total ordering.”; Page 1595 Section 4.1 “the embedding weights are further updated during training.”);
provide stance annotation [[(Habernal, Page 1591 Section 3.2 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "Al is more convincing than A2" (Al> A2), "Al is less convincing than A2" (Al <A2), or "Al and A2 are convincing equally" (Al=A2). Moreover, they were obliged to write the reason 30-140 characters long. An example of fully annotated argument pair is shown in Figure 2.” Page 1592, Figure 2; Please note this claim limitation has been interpreted as an intended use of the claimed interface. What the user interface is used for does not change the functionality of the claimed device. For examination purposes this claim limitation has been read in light of Paragraph [0073] of the original specification), comprising:
the first passage as argument 1 (Habernal, Page 1589, Figure 1) and
a query pair (Habernal, Page 1591 Section 3.2 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1.”) under the topic (Habernal, Page 1591, Section 3.1 “First, we must be sure that the obtained texts are actual arguments. Second, the context of the argument should be known (the prompt and the stance).”), the query pair comprising a first query indicating a first candidate stance as the ‘yes’ Topic for each prompt, e.g. “should physical edu. be mandatory? Yes” (Habernal, Page 1596 Table 3; Please note this claim limitation has been read in light of Figure 5 where the prompt is the same, and the stance is a positive or negative answer for the prompt) and a second query indicating a second candidate stance as the ‘no’ Topic for each prompt, e.g. “should physical edu. be mandatory? No” (Habernal, Page 1596 Table 3; Please note this claim limitation has been read in light of Figure 5 where the prompt is the same, and the stance is a positive or negative answer for the prompt), wherein each query is interactively selectable as the user choosing a reason (Habernal, Page 1591 Section 3.2 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "Al is more convincing than A2" (Al> A2), "Al is less convincing than A2" (Al <A2), or "Al and A2 are convincing equally" (Al=A2). Moreover, they were obliged to write the reason 30-140 characters long. An example of fully annotated argument pair is shown in Figure 2.” Page 1592, Figure 2);
receive a selection of the first query from the query pair via the user interface as the user choosing A1>A2 (Id); and
annotate, in response to the received selection as the user choosing A1>A2 (Id), the ranked passage according to the first candidate stance of the first query (Habernal, Page 1591 Section 3.2 “All 16,927 argument pairs were annotated by five workers each (85k assignments in total). We also allowed workers to express their own standpoint toward the topics.”), thereby indicating an association between the ranked passage (Habernal, Page 1594 Section 3.4.3 “This allows us to rank all arguments for a particular topic.”) and the first candidate stance as the selected more convincing argument of the argument pairs (Habernal, Page 1591 Section 3 “Given two arguments, one should be selected as more convincing”) instead of the second candidate stance as the non-selected less convincing argument (Id).
Habernal does not explicitly teach provide a user interface for stance annotation… wherein the user interface for stance annotation comprises: … displaying… a query pair …enabling either one of the query pair being interactively selectable; and receive the selection of the selectable one of the query pair.
Vaish teaches provide a stance annotation (Vaish, Page 3645 “Photo Ranking captures users’ opinion between two photographs. In formative work with product designers, we found that they require stock photos for mockups, but stock photo sites have sparse ratings. Likewise, computer vision needs more data to identify high-quality images from the web. Photo Ranking (Figure 1) asks users to swipe to choose the better of two stock photos on a theme, or contribute their own through their cell phone camera.”) user interface (Vaish, Page 3645 Introduction “To engage a wider set of crowdsourcing contributors, we introduce twitch crowdsourcing: interfaces that encourage contributions of a few seconds at a time. Taking advantage of the common habit of turning to mobile phones in spare moments [24], we replace the mobile phone unlock screen with a brief crowdsourcing task, allowing each user to make small, compounded volunteer contributions over time”) comprising:
the first passage as the two objects users are asked to provide an opinion between (Vaish, Page 3645 “Photo Ranking captures users’ opinion between two photographs. In formative work with product designers, we found that they require stock photos for mockups, but stock photo sites have sparse ratings. Likewise, computer vision needs more data to identify high-quality images from the web. Photo Ranking (Figure 1) asks users to swipe to choose the better of two stock photos on a theme, or contribute their own through their cell phone camera.”) and
a query pair as the two photos (Id) under the topic as the theme (Id), the query pair …, wherein each query is interactively selectable as the user is asked to swipe to choose between the options (Id);
receive a selection of the first query from the query pair via the user interface (Vaish, Page 3646 Introduction “After making a selection, Twitch users can see whether their peers agreed with their selection. In addition, they can see how their contribution is contributing to the larger whole, for example aggregate responses on a map (Figure 2) or in a fact database (Figure 5).”)…
It would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to have implemented the crowdsourcing taught by Habernal (Page 1591 Section 3.2) to include not only workers, but to be expanded to include volunteer contributors using the Twitch Crowdsourcing interface taught by Vaish, as it yields the predictable results of engaging a wider set of crowdsourcing contributors, which can operate in very short time periods at low cognitive load (Vaish, Page 3645 Introduction “In contrast, existing mobile crowdsourcing platforms (e.g., [12,16,22]) tend to assume long, focused runs of work. Our design challenge is thus to create crowdsourcing tasks that operate in very short time periods and at low cognitive load.”).
Habernal does not explicitly teach a processor; and a memory storing computer executable instructions.
Liu teaches a processor (Liu, ¶10 “The processor 101 may be a central processing unit (CPU), a semiconductor-based microprocessor, or any other device suitable for retrieval and execution of instructions”); and
a memory (Liu, ¶13 “The machine-readable storage medium 102 may be any suitable machine readable medium, such as an electronic, magnetic, optical, or other physical storage device that stores executable instructions or other data (e.g., a hard disk drive, random access memory, flash memory, etc.)”) storing computer executable instructions, which, when executed, cause the processor to: … (Liu, ¶10 “The processor 101 may be a central processing unit (CPU), a semiconductor-based microprocessor, or any other device suitable for retrieval and execution of instructions”).
It would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to have implemented the proposed combination using the underlying hardware taught by Liu, as one of ordinary skill in the art would recognize it as being capable of implementing the disclosed proposed device within a computing environment.
With regard to claim 17 the proposed combination further teaches wherein the memory further comprises computer executable instructions, which, when executed, cause the processor to:
store the passage pair to a collection of passage pairs (Habernal, Page 1590 Section 2 “Our newly created corpus of annotated pairs of arguments might resemble recent large-scale corpora for textual inference”; Page 1594 Section 3.4.3 “call this corpus UKPConvArgStrict.”);
generating a directed graph (Habernal, Page 1593 Algorithm 1: “Building DAG from sorted argument pairs.”), wherein the directed graph comprises a first node representing the first passage as node A for argument A (Habernal, Page 1592 Section 3.4.2 “In particular, does it exhibit properties such that if A≥B and B≥C, then A≥C (total ordering)? We can treat arguments as nodes in a graph and argument pairs as graph edges. We will denote such graph as argument graph (and use nodes/arguments and edges/pairs interchangeably in this section).7” Note 7: “Argument pair A> B becomes a directed edge A→B”), a second node representing the second passage as node B for argument B (Id), and a first directed edge from the first node to the second node based on the labeling relating to levels of the convincingness for the first passage and the second passage as the directed edge A→B (Id), a second directed edge from the second node to a third node as B≥C will become B→C (Id) representing a third passage as node C for argument C (Id);
detecting a cyclic relationship when a third directed edge connects from the third node to the first node (Habernal, Page 1593, Section 3.4.2 “We use Johnson's algorithm for finding all elementary cycles in DAG (Johnson, 1975).”; In Algorithm, 1 “if hasCycles(g) then”); and
remove the third node representing the third passage from the directed graph and the collection of passage pairs as a training set of passages (Habernal, Page 1593 Algorithm 1 “report about breaking DAG”; Page 1594 Section 3.4.3 “We discard the equal argument pairs in advance and filter out argument pairs that break the DAG properties.”).
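For clarity of the record, the graph construction construed above (adding argument pairs as directed edges and discarding any pair whose edge would close a cycle) may be illustrated by the following sketch. This sketch is an illustrative paraphrase only, not Habernal's actual Algorithm 1: the function names are hypothetical, and Habernal's pair weighting and MACE pre-filtering steps are not reproduced here.

```python
# Illustrative sketch (not Habernal's Algorithm 1): build a directed graph
# from "winner > loser" argument pairs, discarding any pair whose edge
# would close a cycle, thereby preserving the DAG property.

def has_path(graph, start, goal):
    """Depth-first search: is `goal` reachable from `start`?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return False

def build_dag(pairs):
    """Add each (winner, loser) edge unless it would create a cycle."""
    graph, discarded = {}, []
    for winner, loser in pairs:
        # The edge winner -> loser closes a cycle iff winner is already
        # reachable from loser through existing edges.
        if has_path(graph, loser, winner):
            discarded.append((winner, loser))  # pair breaks the DAG property
        else:
            graph.setdefault(winner, []).append(loser)
    return graph, discarded

# Example: given A > B and B > C, the pair C > A would close the
# cycle A -> B -> C -> A and is therefore discarded.
graph, discarded = build_dag([("A", "B"), ("B", "C"), ("C", "A")])
```

In this toy example, the pair C > A is discarded because it would close the cycle A → B → C → A, mirroring the quoted step of filtering out argument pairs that break the DAG properties.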
With regard to claim 18, the proposed combination further teaches
disable the selectable as automatic skipping/exiting the task (Vaish, Page 3648 “If users do not wish to answer a question, they may skip Twitch by selecting ‘Exit’ via the options menu. This design decision has been made to encourage the user to give Twitch an answer, which is usually faster than exiting. Future designs could make it easier to skip a task, for example through a swipe-up.”) query pair as the choices (Vaish, Page 3647; “Twitch crowdsourcing interactions are very brief, so it is important that users can complete the tasks extremely quickly. Likewise, crowdsourcing is not the user’s primary task, so these tasks must be lightweight and not distracting. … Each task involves a choice between two to six options through a single motion such as a tap or swipe.”) when a predetermined time period elapses before receiving a selection (Vaish, Page 3646 “The median Census task unlock took 1.6 seconds,”; Page 3647 “Quick completion: contribute within a couple of seconds”).
With regard to claim 19 the proposed combination further teaches provide a user interface as the HIT interface implemented within the twitch crowdsourcing interface (Habernal, Page 1591 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "Al is more convincing than A2" (Al> A2), "Al is less convincing than A2" (Al <A2), or "Al and A2 are convincing equally" (Al=A2).”; Vaish, Page 3645 Introduction “To engage a wider set of crowdsourcing contributors, we introduce twitch crowdsourcing: interfaces that encourage contributions of a few seconds at a time. Taking advantage of the common habit of turning to mobile phones in spare moments [24], we replace the mobile phone unlock screen with a brief crowdsourcing task, allowing each user to make small, compounded volunteer contributions over time”) for convincingness annotation (Habernal, Page 1591 “Moreover, they were obliged to write the reason 30-140 characters long. An example of fully annotated argument pair is shown in Figure 2.”; Please note this claim limitation has been construed as an intended use of the claimed user interface), wherein the user interface for the convincingness annotation comprises:
displaying the query as the prompt (Page 1589), wherein the query provides the stance as a stance (Page 1589 Figure 1; Page 1591 “We will use the following terminology. We use topic to refer to a subset of an on-line debate with a given prompt and a certain stance (for example, "Should physical education be mandatory in schools? - yes" is considered as a single topic).”) under the topic as a single topic (Id);
displaying the passage pair as Argument A1 and Argument A2 (Page 1589; Figure 1; Page 1591 “Argument pair is an ordered set of two arguments (Al and A2) belonging to the same topic; see Figure 1.”);
enabling either one of the passage pair being interactively selectable as the workers had to choose via swiping left or right (Habernal, Page 1591 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "Al is more convincing than A2" (Al> A2), "Al is less convincing than A2" (Al <A2), or "Al and A2 are convincing equally" (Al=A2).”; Vaish, Page 3645 “Photo Ranking captures users’ opinion between two photographs. In formative work with product designers, we found that they require stock photos for mockups, but stock photo sites have sparse ratings. Likewise, computer vision needs more data to identify high-quality images from the web. Photo Ranking (Figure 1) asks users to swipe to choose the better of two stock photos on a theme, or contribute their own through their cell phone camera.”);
receiving selection of either one of the passage pair as the user making a selection, e.g. choosing A1>A2 (Habernal, Page 1591 “In the HIT, workers were presented with an argument pair, the prompt, and the stance as in Figure 1. They had to choose either "Al is more convincing than A2" (Al> A2), "Al is less convincing than A2" (Al <A2), or "Al and A2 are convincing equally" (Al=A2).”; Vaish, Page 3646 Introduction “After making a selection, Twitch users can see whether their peers agreed with their selection. In addition, they can see how their contribution is contributing to the larger whole, for example aggregate responses on a map (Figure 2) or in a fact database (Figure 5).”); and
performing the labeling as adding the annotations to the pairs of arguments, e.g. the real value scores assigned to each argument so the arguments can be ranked by their convincingness (Habernal, Page 1590 Section 2, “Our newly created corpus of annotated pairs of arguments might resemble recent large-scale corpora for textual inference”) based on the selection of either one of the passage pair (Habernal, Page 1591 Section 3.2 “All 16,927 argument pairs were annotated by five workers each (85k assignments in total). We also allowed workers to express their own standpoint toward the topics.”).
With regard to claim 20 the proposed combination further teaches wherein the memory further comprises computer executable instructions, which, when executed, cause the processor to:
provide the passage pair to the convincingness ranking model as the BLSTM (Habernal, Page 1590; Section 1 “We propose a novel task of predicting convincingness of arguments in an argument pair, as well as ranking arguments related to a certain topic. Since no data for such a task are available, we create a new annotated corpus. We employ SVM model with rich linguistic features as well as bidirectional Long Short-Term Memory (BLSTM) neural networks because of their excellent performance across various end-to-end NLP tasks (Goodfellow et al., 2016; Piech et al., 2015; Wen et al., 2016; Dyer et al., 2015; Rocktaschel et al., 2016).”), wherein the convincingness ranking model is based on a feed forward neural network as BLSTM is inherently a neural network that has both feed forwarding and back propagation and SVM model is inherently a feed forward ML model (Id; Page 1595 “The core of the model consists of two bi-directional LSTM networks with 64 output neurons each.”) with back propagation as BiLSTM has Back propagation (Id) for updating (Page 1595 “the embedding weights are further updated during training.”) the convincingness ranking model with error correction (Page 1595 “We train the network with ADAM optimizer (Kingma and Ba, 2015) using binary cross-entropy loss function and regularize by early stopping (5 training epochs) and high drop-out rate (0.5) in the dropout layer.”; Page 1596 “we only replace the output layer with a linear activation function and optimize mean absolute error loss”);
determine a convincingness score (Habernal, Page 1593 Section 3.4.2 “We thus compute a weight for each argument pair.”; Page 1596 Section 4.2 “We address this problem as a regression task. We use the UKPConvArgRank data, in which a real value score is assigned to each argument so the arguments can be ranked by their convincingness (for each topic independently). The task is thus to predict a real-value score for each argument from the test topic (remember that we use 32-fold cross validation).”; Please note this claim limitation has been construed as referring to the labels which have been reapplied to the passage pair) based on the passage pair as A1 and A2 (Habernal, Page 1592 Section 3.4.2); and
update the convincingness ranking model based on comparison between the determination of the convincingness score and the label of the passage pair (Page 1595 “the embedding weights are further updated during training.”).
Response to Arguments
Applicant's arguments filed October 7, 2025 have been fully considered but they are not persuasive. All the arguments regarding the newly added limitations are addressed in the above rejections.
With regard to claim 1, Applicant argues that Vaish does not teach the claimed stance user interface because Vaish describes a feature that asks users to choose between the better of two stock photos on a theme.
In response, Habernal teaches the functionality of the device and discloses figures demonstrating the HIT application, which is presented to workers to make a selection; however, while a user interface is implied, Habernal does not explicitly teach that this interface is presented to end users. Vaish teaches an explicit user interface, provided using twitch crowdsourcing, that enables users to select between two options (such as the options Habernal has the workers choose between). Within the proposed combination, the HIT application taught by Habernal has been modified to use the swiping interface taught by Vaish to facilitate expanding the annotations gathered from not only the HIT workers but also volunteer contributors using the Twitch crowdsourcing interface.
Applicant argues against the references individually; however, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
With regard to claim 11, Applicant asserts that the prior art does not read on the claim language. Applicant quotes the claim language and asserts that Habernal does not address it, but provides no reasoning or rationale in support. Applicant does not address the claim mapping of record.
Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
One of ordinary skill in the art would recognize the ML system taught by Habernal as being iteratively trained based on the sampled pairs as the crowdsource information is gathered (Habernal, Page 1593 “Building argument graph from crowdsourced argument pairs We build the argument graph iteratively by sampling annotated argument pairs and adding them as graph edges (see Algorithm 1).”) where the weights (e.g. the scores used to determine the convincingness rank) are updated during the training operation (Habernal, Page 1595 Section 4.1 “the embedding weights are further updated during training”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMANDA WILLIS whose telephone number is (571)270-7691. The examiner can normally be reached Monday-Friday 8am-2pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ajay Bhatia can be reached at 571-272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMANDA L WILLIS/ Primary Examiner, Art Unit 2156