DETAILED ACTION
Notice of AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 10/08/2024, 12/11/2024 and 02/09/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Status of Claims
The claims were amended pursuant to a preliminary amendment filed together with the initial claim set on 07/16/2024. For examination purposes, the amended claim set has been used. Claims 3-7, 9-11, 14-18 and 20 were amended. Claims 1-20 are pending, of which claims 1, 18 and 20 are independent.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The independent claims 1, 18 and 20 recite “receiving content”; “computing a rhetoric vector by analyzing a language structure of the content, the rhetoric vector comprising one or more dimensions each representative of a rhetoric aspect of the language structure”; and “classifying the rhetoric vector with a trained classifier to determine whether a content advisory should be associated with the content”. The limitations above, as drafted, recite a process that, under its broadest reasonable interpretation, covers a mental process, as it could be performed in the human mind or with the aid of pen and paper.
The limitations of “receiving ...”, “computing ...”, and “classifying ...”, as drafted, cover mental activities. More specifically, a human can obtain content, which can be an email or dialogue; can compute, based on the different aspects/features of the language structure of the content, a rhetoric vector, which can be a numeric representation of those aspects/features; and, with a trained classifier, which can be a predefined/premade table of rhetoric vectors with scores or identifiers, can classify the vector and decide whether there should be a warning/advisory that the content contains a high level of rhetoric. The above steps, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind. That is, other than reciting a “processor” and a “non-transitory computer readable memory”, nothing in the claim elements precludes the steps from practically being performed in the human mind. Additionally, the mere nominal recitation of a generic computer does not take the claim limitations out of the mental processes grouping. Thus, the claims recite a mental process.
The claims recite the additional limitation of a “trained classifier” for performing the method, which is recited at a high level of generality and as performing generic computer functions routinely used in computer applications. The specification in paragraph [0062] states that “different types of classifiers may be used, such as Naive Bayes, Logistic Regression, Boosted Decision Tree, or Random Forest”, which is generic and not sufficient to amount to significantly more than the judicial exception. Claims 18 and 20 recite the additional limitations of a “processor” and a “non-transitory computer readable memory”. The specification in paragraphs [0030], [0032] and [0053] describes them as generic, and they are not sufficient to amount to significantly more than the judicial exception. All of these are recited at a high level of generality and as performing generic computer functions routinely used in computer applications. This is no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Claims 1, 18 and 20 are therefore not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more than the abstract idea.
Claim 2 recites the additional limitation of “wherein computing the rhetoric vector comprises computing each dimension, at least partly, using one or more language structure metrics including a distance metric, a proportion metric, and a count metric”, where calculating different metrics for the language structure, per the specification at paras. [0072]-[0074], could be the calculation of different linguistic features, which could be performed in the human mind or with the aid of pen and paper. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as claim 2 does not recite any additional limitations. The claim, as drafted, is not patent eligible.
Claim 3 recites “wherein computing the rhetoric vector further comprises computing a word count in the content”. Computing a word count of the content could be performed in the human mind or with the aid of pen and paper. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as claim 3 does not recite any additional limitations. The claim, as drafted, is not patent eligible.
Claim 4 recites “further comprising computing a plurality of dimensions of the rhetoric vector”. Computing the dimensions of the rhetoric vector, which per the specification at para. [0076] could be different types of linguistic features, could be performed in the human mind or with the aid of pen and paper. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as claim 4 does not recite any additional limitations. The claim, as drafted, is not patent eligible.
Claim 5 recites “further comprising preprocessing the content, wherein preprocessing the content comprises one or both of: extracting the content from extraneous content, and generating cleaned text from the content”, where preprocessing the content by removing unrelated content and generating clean text could be performed in the human mind or with the aid of pen and paper. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as claim 5 does not recite any additional limitations. The claim, as drafted, is not patent eligible.
Claim 6 recites “wherein the content is received in a textual format, and the trained classifier is trained for the textual content”. Determining that the content is in a textual format and that the classifier is trained with textual content is an evaluation or observation that could be performed in the human mind or with the aid of pen and paper. The claim recites the additional limitation of a trained classifier, which the specification in paragraph [0062] describes as, “different types of classifiers may be used, such as Naive Bayes, Logistic Regression, Boosted Decision Tree, or Random Forest”, which is generic and not sufficient to amount to significantly more than the judicial exception. Claim 6, as drafted, is not patent eligible.
Claim 7 recites “wherein the content is received in an audio or video format, and the method further comprises converting the format to a textual format and analyzing the language structure of the textual format of the content”. Determining that the content can be in either an audio or a video format is an observation or evaluation that could be performed in the human mind or with the aid of pen and paper. Converting the audio or video format to a textual format could also be performed in the human mind or with the aid of pen and paper. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as claim 7 does not recite any additional limitations. The claim, as drafted, is not patent eligible.
Claim 8 recites “wherein the trained classifier is trained for audio or video content”. Determining that the classifier is trained with audio or video content is an evaluation or observation that could be performed in the human mind or with the aid of pen and paper. The claim recites the additional limitation of a trained classifier, which the specification in paragraph [0062] describes as, “different types of classifiers may be used, such as Naive Bayes, Logistic Regression, Boosted Decision Tree, or Random Forest”, which is generic and not sufficient to amount to significantly more than the judicial exception. Claim 8, as drafted, is not patent eligible.
Claim 9 recites “further comprising generating a content advisory for the content based on one or more dimensions of the rhetoric vector”, where generating an advisory or warning based on the linguistic features (dimensions of the rhetoric vector) could be performed with the aid of pen and paper. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as claim 9 does not recite any additional limitations. The claim, as drafted, is not patent eligible.
Claim 10 recites “further comprising associating a content advisory to the content based on one or more dimensions of the rhetoric vector”, where associating a content advisory or warning to the content based on the linguistic features (dimensions of the rhetoric vector) could be performed in the human mind or with the aid of pen and paper. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as claim 10 does not recite any additional limitations. The claim, as drafted, is not patent eligible.
Claim 11 recites “further comprising outputting the content advisory to a content consumer”. Determining that the content advisory is delivered to a consumer as an output is an evaluation or observation that could be performed in the human mind or with the aid of pen and paper. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as claim 11 does not recite any additional limitations. The claim, as drafted, is not patent eligible.
Claim 12 recites “wherein the content advisory is output to the content consumer concurrently with the content or prior to giving access to the content to the content consumer”. Determining that the content advisory is delivered to a consumer as an output and is given before access to the content is an evaluation or observation that could be performed in the human mind or with the aid of pen and paper. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as claim 12 does not recite any additional limitations. The claim, as drafted, is not patent eligible.
Claim 13 recites “wherein the content advisory is output to the content consumer in association with a web search result returning the content”. Determining that the content advisory is delivered to a consumer as an output associated with a web search result is an evaluation or observation that could be performed in the human mind or with the aid of pen and paper. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as claim 13 does not recite any additional limitations. The claim, as drafted, is not patent eligible.
Claim 14 recites “further comprising ranking a webpage containing the content using one or both of the rhetoric vector and a classification of the rhetoric vector by the trained classifier”, where ranking a webpage using the rhetoric vector and its classification by the trained classifier could be an evaluation or observation and could be performed in the human mind or with the aid of pen and paper. The claim recites the additional limitation of a trained classifier, which the specification in paragraph [0062] describes as, “different types of classifiers may be used, such as Naive Bayes, Logistic Regression, Boosted Decision Tree, or Random Forest”, which is generic and not sufficient to amount to significantly more than the judicial exception. Claim 14, as drafted, is not patent eligible.
Claim 15 recites “further comprising adjusting a pay-per-click cost associated with the content based on one or both of the rhetoric vector and a classification of the rhetoric vector by the trained classifier”, where a human can manually adjust a payment for content based on the classification of the rhetoric vector by the classifier. The claim recites the additional limitation of a trained classifier, which the specification in paragraph [0062] describes as, “different types of classifiers may be used, such as Naive Bayes, Logistic Regression, Boosted Decision Tree, or Random Forest”, which is generic and not sufficient to amount to significantly more than the judicial exception. Claim 15, as drafted, is not patent eligible.
Claim 16 recites “wherein the method is performed at a server providing access to the content”. Determining that the method is performed at a server is an observation or evaluation that could be performed in the human mind or with the aid of pen and paper. The claim recites the additional limitation of a server, which is recited as performing generic computer functions and is not sufficient to amount to significantly more than the judicial exception. Claim 16, as drafted, is not patent eligible.
Claim 17 recites “wherein the method is performed at a user device attempting to access to the content”. Determining that the method is performed at a user device is an evaluation or observation that could be performed in the human mind or with the aid of pen and paper. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as claim 17 does not recite any additional limitations. The claim, as drafted, is not patent eligible.
Claim 19 recites “further comprising a database for storing the content in association with the content advisory when generated”. Determining that the content is stored with the advisory in a database is an evaluation or observation that could be performed in the human mind or with the aid of pen and paper. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as claim 19 does not recite any additional limitations. The claim, as drafted, is not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 6, 9-11, 16-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Galitsky et al. (US 20230376693 A1), hereinafter referenced as Galitsky, in view of Ruben et al. (Identification of Truth and Deception in Text: Application of Vector Space Model to Rhetorical Structure Theory, Proceedings of the EACL 2012 Workshop on Computational Approaches to Deception Detection, April 2012), hereinafter referenced as Ruben.
Regarding Claim 1, Galitsky teaches a content analysis method, comprising:
receiving content (Galitsky: Para. [0577], [0579], Figs. 1, 47, a process 4700, which is implemented on rhetoric classification computing device 101 by rhetoric classification application 102, is used to detect the presence of deception or fake content. At block 4701, the application receives text including sentence fragments, which can include utterances from the user 160);
computing a rhetoric [vector] by analyzing a language structure of the content, the rhetoric [vector] comprising one or more dimensions each representative of a rhetoric aspect of the language structure (Galitsky: Para. [0100], [0580]-[0599], Figs. 1, 47, rhetoric classification application 102 can detect a presence of deception in text by analyzing textual content received from a user device and creating one or more CDTs (communicative discourse trees) from the textual content. From the CDTs, rhetoric classification application 102 identifies a presence of non-trivial rhetorical relations and/or a presence of nested communicative actions. At blocks 4702-4707, the rhetoric classification application analyzes the rhetorical structure of the content using communicative discourse trees, which have rhetoric relations between the parts of the sentences to represent one or more linguistic features of the text (one or more dimensions of a rhetoric aspect));
and classifying the rhetoric vector with a trained classifier to determine whether a content advisory should be associated with the content (Galitsky: Para. [0096], [0616], Figs. 1, 47, using a trained rhetoric agreement classifier 120, rhetoric classification application 102 determines whether the content/text is above a threshold level of matching. At block 4710, process 4700 involves, in response to determining that the complexity score exceeds the threshold, identifying the text as including deceptive content. A high complexity score can indicate a text that is unnaturally emotionally charged, with an invented overly complex mental state. Such a high complexity value indicates that the text cannot be trusted at all and is associated with deception or lies. The rhetoric classification application 102 can provide a response to a user device by indicating deception or fake content (warning or advisory)).
Galitsky, while teaching the method of Claim 1, fails to explicitly teach the claimed, computing a rhetoric vector by analyzing a language structure of the content, the rhetoric vector comprising one or more dimensions each representative of a rhetoric aspect of the language structure.
However, Ruben does teach the claimed, computing a rhetoric vector by analyzing a language structure of the content, the rhetoric vector comprising one or more dimensions each representative of a rhetoric aspect of the language structure (Ruben: Pages 98-99, Section: RST-VSM Methodology: a methodology for deception detection research. Rhetorical Structure Theory (RST) analysis with subsequent application of the Vector Space Model (VSM) is described. Using a vector space model, the written stories can be represented as RST vectors in a high-dimensional space. According to the VSM, stories are represented as vectors, and the dimension of the vector space equals the number of RST relations in the set of all written stories under consideration).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Ruben’s teaching of applying the Vector Space Model to Rhetorical Structure Theory to identify systematic differences between deceptive and truthful stories in terms of their coherence and structure into the system and method of using communicative discourse trees to identify deception or fake content in text, taught by Galitsky, because the rigorous and systematic approach of the RST-VSM methodology would identify previously unseen deceptive texts more accurately (Ruben, Page 103, Section: Conclusions).
Claim 18 is a content analysis system claim, comprising: a processor (Galitsky: Para. [0721], Fig. 51, processor in the processing unit 5104); and a non-transitory computer-readable memory having computer-executable instructions stored thereon, which when executed by the processor (Galitsky: Para. [0722], [0730], Fig. 51, non-transitory computer-readable storage media in storage media 5122; processing unit 5104 can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes), configure the content analysis system to perform the steps of method claim 1 above. As such, claim 18 is similar in scope and content to claim 1, and claim 18 is therefore rejected under a similar rationale as presented against claim 1 above.
Claim 20 is a non-transitory computer readable memory claim, having computer-executable instructions stored thereon, which when executed by a processor (Galitsky: Para. [0722], [0730], Fig. 51, non-transitory computer-readable storage media in storage media 5122; processing unit 5104 can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes), configure the processor to perform the steps of method claim 1 above. As such, claim 20 is similar in scope and content to claim 1, and claim 20 is therefore rejected under a similar rationale as presented against claim 1 above.
Regarding Claim 4, Galitsky in view of Ruben teach the method of claim 1. Galitsky further teaches, further comprising computing a plurality of dimensions of the rhetoric vector (Galitsky: Para. [0488], rhetoric classification application 102 determines, from the communicative discourse tree, a set of defeasible rules by extracting, from the communicative discourse tree, one or more of (i) an elementary discourse unit that is a rhetorical relation type contrast and (ii) a communicative action that is of a class type disagree. The class disagree includes actions such as "deny," "have different opinion," "not believe," "refuse to believe," "contradict," "diverge," "deviate," "counter," "differ," "dissent," "be dissimilar" (plurality of dimensions)).
Regarding Claim 6, Galitsky in view of Ruben teach the method of claim 1. Galitsky further teaches, wherein the content is received in a textual format, and the trained classifier is trained for the textual content (Galitsky: Para. [0100], rhetoric classification application 102 analyzes textual content received from a user device. Para. [0172], rhetoric classification application 102 uses training data 125 to train rhetoric agreement classifier 120. Training data 125 can include a positive training set and a negative training set (text)).
Regarding Claim 9, Galitsky in view of Ruben teach the method of claim 1. Galitsky further teaches, further comprising generating a content advisory for the content based on one or more dimensions of the rhetoric vector (Galitsky: Para. [0616], Fig. 47, at 4710, rhetoric classification application 102 can generate an indication of deception or fake content based on the counting of words (dimension of the rhetoric vector). Depending on the genre of the text, e.g., opinionated text, truthful text that is emotionally charged may have a complexity above 10 for 100 words of text. A complexity density above 30 can indicate a text that is unnaturally emotionally charged, with an invented overly complex mental state).
Regarding Claim 10, Galitsky in view of Ruben teach the method of claim 1. Galitsky further teaches, further comprising associating a content advisory to the content based on one or more dimensions of the rhetoric vector (Galitsky: Para. [0616], whether the content is deceptive or fake is associated with word count (dimension of the rhetoric vector). Depending on the genre of the text, e.g., opinionated text, truthful text that is emotionally charged may have a complexity above 10 for 100 words of text. A complexity density above 30 can indicate a text that is unnaturally emotionally charged, with an invented overly complex mental state).
Regarding Claim 11, Galitsky in view of Ruben teach the method of claim 9. Galitsky further teaches, further comprising outputting the content advisory to a content consumer (Galitsky: Para. [0616], Fig. 47, at 4710, rhetoric classification application 102 can generate an indication of deception or fake content and send the response to a user device).
Regarding Claim 16, Galitsky in view of Ruben teach the method of claim 1. Galitsky further teaches, wherein the method is performed at a server providing access to the content (Galitsky: Para. [0677], Fig. 49, server 4912 may be communicatively coupled with remote client computing devices).
Regarding Claim 17, Galitsky in view of Ruben teach the method of claim 1. Galitsky further teaches, wherein the method is performed at a user device attempting to access to the content (Galitsky: Para. [0093], Figs. 1, 50, examples of rhetoric classification computing device 101 include client computing devices 5002, 5004, 5006, and 5008 depicted in FIG. 50).
Claims 2, 3, 13, 14 are rejected under 35 U.S.C. 103 as being unpatentable over Galitsky et al. (US 20230376693 A1), hereinafter referenced as Galitsky, in view of Ruben et al. (Identification of Truth and Deception in Text: Application of Vector Space Model to Rhetorical Structure Theory, Proceedings of the EACL 2012 Workshop on Computational Approaches to Deception Detection, April 2012), hereinafter referenced as Ruben, further in view of Najork et al. (US 7962510 B2), hereinafter referenced as Najork.
Regarding Claim 2, Galitsky in view of Ruben teach the method of claim 1. Galitsky further teaches, wherein computing the rhetoric vector comprises computing each dimension, at least partly, using one or more language structure metrics including a distance metric[, a proportion metric, and a count metric] (Galitsky: Para. [0171], Fig. 1, rhetoric classification application 102 can process two pairs at a time, for example <q1, a1> and <q2, a2>, and compares q1 with q2 and a1 with a2. Such a comparison allows a determination of whether an unknown question/answer pair contains a correct answer or not by assessing a distance from another question/answer pair with a known label (distance metric)).
Galitsky in view of Ruben, while teaching the method of claim 2, fail to explicitly teach the claimed, wherein computing the rhetoric vector comprises computing each dimension, at least partly, using one or more language structure metrics including [a distance metric,] a proportion metric, and a count metric.
However, Najork does teach the claimed, wherein computing the rhetoric vector comprises computing each dimension, at least partly, using one or more language structure metrics including [a distance metric,] a proportion metric, and a count metric (Najork: Column 5, lines 40-56, Fig. 3, when the web pages are evaluated using such a metric (e.g., in step 220), if the number of words of the web page falls above a threshold value (count metric), the web page can be identified as web spam pending the results of any other evaluations that may be based on additional metrics. Column 6, lines 41-48, Fig. 8, the zipRatio of a page is determined by dividing the size (in bytes) of uncompressed visible text by the size (in bytes) of compressed visible text. When a web page is evaluated using such a potential metric (e.g., in step 220), if the zipRatio (proportion metric) of the web page lies above a threshold (e.g., 2.0 here), then the web page can be identified as web spam pending the results of any other evaluations that may be based on additional metrics).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Najork’s teaching of using content analysis to detect spam web pages into the system and method taught by Galitsky in view of Ruben, because using a classifier to combine different metrics to detect spam pages can improve the prediction accuracy (Najork, Column 4, lines 11-26).
Regarding Claim 3, Galitsky in view of Ruben teach the method of claim 1. Galitsky in view of Ruben fail to explicitly teach the claimed, wherein computing the rhetoric vector further comprises computing a word count in the content.
However, Najork does teach the claimed, wherein computing the rhetoric vector further comprises computing a word count in the content (Najork: Column 5, lines 40-56, Fig. 3, if the number of words of the web page falls above a threshold value, the web page can be identified as web spam pending the results of any other evaluations that may be based on additional metrics).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Najork’s teaching of using content analysis to detect spam web pages into the system and method taught by Galitsky in view of Ruben, because using a classifier to combine different metrics to detect spam pages can improve the prediction accuracy (Najork, Column 4, lines 11-26).
Regarding Claim 13, Galitsky in view of Ruben teach the method of claim 11. Galitsky in view of Ruben fail to explicitly teach the claimed, wherein the content advisory is output to the content consumer in association with a web search result returning the content.
However, Najork does teach the claimed, wherein the content advisory is output to the content consumer in association with a web search result returning the content (Najork: Column 4, lines 51-67, Fig. 2A, a search engine receives contents by crawling a set of web pages at step 210. The crawled web pages are evaluated (via a processor, also referred to as a classifier) using one or more metrics at step 220. The result(s) of the metrics are compared against one or more thresholds to determine whether web spam is present (or likely present) at step 230).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Najork’s teaching of using content analysis to detect spam web pages into the system and method taught by Galitsky in view of Ruben, because using a classifier to combine different metrics to detect spam pages can improve the prediction accuracy (Najork, Column 4, lines 11-26).
Regarding Claim 14, Galitsky in view of Ruben teach the method of claim 1. Galitsky in view of Ruben fail to explicitly teach the claimed, further comprising ranking a webpage containing the content using one or both of the rhetoric vector and a classification of the rhetoric vector by the trained classifier.
However, Najork does teach the claimed, further comprising ranking a webpage containing the content using one or both of the rhetoric vector and a classification of the rhetoric vector by the trained classifier (Najork: Column 4, lines 36-50, Fig. 1 illustrates an example spam web page. Spam web page 10 includes keywords, search terms, and links (classification), each of which can be generated by an SEO (search engine optimizer) to enhance the ranking of a web site in a search results list from a search engine or the like).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Najork’s teaching of using content analysis to detect spam web pages into the system and method taught by Galitsky in view of Ruben, because using a classifier to combine different metrics to detect spam pages can improve the prediction accuracy (Najork, Column 4, lines 11-26).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Galitsky et al. (US 20230376693 A1), hereinafter referenced as Galitsky, in view of Ruben et al. (Identification of Truth and Deception in Text: Application of Vector Space Model to Rhetorical Structure Theory, Proceedings of the EACL 2012 Workshop on Computational Approaches to Deception Detection, April 2012), hereinafter referenced as Ruben, further in view of Chandramouli et al. (US 20150254566 A1), hereinafter referenced as Chandramouli.
Regarding Claim 5, Galitsky in view of Ruben teach the method of claim 1. Galitsky in view of Ruben fail to explicitly teach the claimed, further comprising preprocessing the content, wherein preprocessing the content comprises one or both of: extracting the content from extraneous content, and generating cleaned text from the content.
However, Chandramouli does teach the claimed, further comprising preprocessing the content, wherein preprocessing the content comprises one or both of: extracting the content from extraneous content, and generating cleaned text from the content (Chandramouli: Para. [0238]-[0244], the preprocessing steps implemented for all the data sets are tokenization, stemming, pruning, removal of punctuation (no punctuation, NOP), and removal of tab, line, and paragraph indicators, to produce a clean dataset for detection accuracy).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Chandramouli’s teaching of automated detection of deception in short and multilingual electronic messages into the system and method taught by Galitsky in view of Ruben, because this would improve the accuracy of detecting deception in electronic text communications (Chandramouli, Para. [0004]-[0007]).
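A minimal, self-contained sketch of the preprocessing pipeline Chandramouli describes (tokenization, stemming, punctuation and whitespace-indicator removal) is shown below. The crude suffix stripper is a hypothetical stand-in for a real stemmer (e.g., a Porter-style stemmer) and is illustrative only:

```python
import re

def preprocess(text: str) -> list[str]:
    # Remove tab, line, and paragraph indicators.
    text = re.sub(r"[\t\r\n]+", " ", text)
    # Tokenize on alphabetic runs, discarding punctuation (NOP).
    tokens = re.findall(r"[a-z]+", text.lower())
    # Crude "stemming": strip a few common English suffixes.
    stemmed = []
    for tok in tokens:
        for suffix in ("ing", "ed", "es", "s"):
            if tok.endswith(suffix) and len(tok) > len(suffix) + 2:
                tok = tok[: -len(suffix)]
                break
        stemmed.append(tok)
    return stemmed

print(preprocess("The claims\twere amended,\nand pending."))
```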
Claims 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Galitsky et al. (US 20230376693 A1), hereinafter referenced as Galitsky, in view of Ruben et al. (Identification of Truth and Deception in Text: Application of Vector Space Model to Rhetorical Structure Theory, Proceedings of the EACL 2012 Workshop on Computational Approaches to Deception Detection, April 2012), hereinafter referenced as Ruben, further in view of Li et al. (US 11282509 B1), hereinafter referenced as Li.
Regarding Claim 7, Galitsky in view of Ruben teach the method of claim 1. Galitsky in view of Ruben fail to explicitly teach the claimed, wherein the content is received in an audio or video format, and the method further comprises converting the format to a textual format and analyzing the language structure of the textual format of the content.
However, Li does teach the claimed, wherein the content is received in an audio or video format, and the method further comprises converting the format to a textual format and analyzing the language structure of the textual format of the content (Li: Column 20, lines 44-66, column 21, lines 1-9, Fig. 2, transcription component 218 generates a text representation of speech utterances. The transcription 226 may be a textual representation of speech utterances in the audio signal included in the item of content 202. Transcription 226 is analyzed by text classifier 228 for the semantic meanings of the utterances).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Li’s teaching of classifiers for media content into the system and method taught by Galitsky in view of Ruben, because detecting and classifying different content types using the described techniques may improve content recommendations and/or search results by identifying, from what is included in the video or audio itself, what the content is "about," along with other content having similar characteristics (Li, Column 2).
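The transcribe-then-classify pipeline Li describes can be sketched as follows. The transcribe() stub stands in for a real speech-to-text component (such as Li's transcription component 218), and the keyword-matching classifier is a toy stand-in for Li's text classifier 228; none of these names are real APIs:

```python
def transcribe(audio_bytes: bytes) -> str:
    # Placeholder: a real system would invoke a speech-to-text model here.
    raise NotImplementedError("plug in a speech-to-text backend")

def classify_text(text: str, advisory_terms: set[str]) -> bool:
    """Toy text classifier: flag content whose transcript contains
    any advisory term."""
    words = set(text.lower().split())
    return bool(words & advisory_terms)

# Usage with a pre-computed transcript standing in for transcribe() output:
transcript = "this recording contains strong language"
print(classify_text(transcript, {"strong", "graphic"}))  # True
```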
Regarding Claim 8, Galitsky in view of Ruben, further in view of Li, teach the method of claim 7. Li further teaches, wherein the trained classifier is trained for audio or video content (Li: Column 9, lines 6-17, Fig. 1, the audio classifier 112 may be a machine-learned model trained to detect and/or classify events in an audio signal included in the content received from the computing device 104(1)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Li’s teaching of classifiers for media content into the system and method taught by Galitsky in view of Ruben, because detecting and classifying different content types using the described techniques may improve content recommendations and/or search results by identifying, from what is included in the video or audio itself, what the content is "about," along with other content having similar characteristics (Li, Column 2).
Claims 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Galitsky et al. (US 20230376693 A1), hereinafter referenced as Galitsky, in view of Ruben et al. (Identification of Truth and Deception in Text: Application of Vector Space Model to Rhetorical Structure Theory, Proceedings of the EACL 2012 Workshop on Computational Approaches to Deception Detection, April 2012), hereinafter referenced as Ruben, further in view of Brestoff et al. (US 9552548 B1), hereinafter referenced as Brestoff.
Regarding Claim 12, Galitsky in view of Ruben teach the method of claim 11. Galitsky in view of Ruben fail to explicitly teach the claimed, wherein the content advisory is output to the content consumer concurrently with the content or prior to giving access to the content to the content consumer.
However, Brestoff does teach the claimed, wherein the content advisory is output to the content consumer concurrently with the content or prior to giving access to the content to the content consumer (Brestoff: Column 11, lines 53-67, the trained deep learning system and the algorithm itself would scan internal emails and would run in the background. The system would output an alert to a privileged list of in-house personnel, likely in-house counsel or employees, who would be able to see a spreadsheet with a score in column A and the related text in column B, along with a bar chart for context, and would then be enabled to call forward the emails of interest with early warning).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Brestoff’s teaching of identifying risk and providing early warning using classified text and a deep learning algorithm into the system and method taught by Galitsky in view of Ruben, because this would detect the risk of litigation and would provide early warning to the appropriate personnel (Brestoff, Column 2).
Regarding Claim 19, Galitsky in view of Ruben teach the content analysis system of claim 18. Galitsky in view of Ruben fail to explicitly teach the claimed, further comprising a database for storing the content in association with the content advisory when generated.
However, Brestoff does teach the claimed, further comprising a database for storing the content in association with the content advisory when generated (Brestoff: Column 7, lines 10-13, Fig. 1, at step 112, a determination is made that an identified email is a true positive (i.e., it contains an early warning topic); a copy of that email may be placed in a True Positive database at step 114).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Brestoff’s teaching of identifying risk and providing early warning using classified text and a deep learning algorithm into the system and method taught by Galitsky in view of Ruben, because this would detect the risk of litigation and would provide early warning to the appropriate personnel (Brestoff, Column 2).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Galitsky et al. (US 20230376693 A1), hereinafter referenced as Galitsky, in view of Ruben et al. (Identification of Truth and Deception in Text: Application of Vector Space Model to Rhetorical Structure Theory, Proceedings of the EACL 2012 Workshop on Computational Approaches to Deception Detection, April 2012), hereinafter referenced as Ruben, further in view of Chino et al. (US 20160371361 A1), hereinafter referenced as Chino.
Regarding Claim 15, Galitsky in view of Ruben teach the method of claim 1 [[to 13]]. Galitsky in view of Ruben fail to explicitly teach the claimed, further comprising adjusting a pay-per-click cost associated with the content based on one or both of the rhetoric vector and a classification of the rhetoric vector by the trained classifier.
However, Chino does teach the claimed, further comprising adjusting a pay-per-click cost associated with the content based on one or both of the rhetoric vector and a classification of the rhetoric vector by the trained classifier (Chino: Para. [0072], a click will not be counted as a financial event by the system unless it is preceded or accompanied by another such event that informs the system that a human being is conducting the click (adjusting). This solves the long-standing problem of click fraud within the Internet advertising industry by changing the payment trigger from a cost-per-click metric to a cost-per-engagement metric).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Chino’s teaching of a method and system which includes a user interface module accessible by a plurality of user computers operated by a plurality of users over the network, and which is operative to respond to user requests for web pages or other selections of content, where user inputs of one or more collections can be obtained, into the system and method taught by Galitsky in view of Ruben, because this would optimize monetization for users who create collections, as well as enable such users to submit requests via the browser or application to a server to select the advertising location and market participants within their collections (Chino, Para. [0002]).
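The cost-per-engagement trigger Chino describes in Para. [0072] (a click is billed only when preceded or accompanied by another event indicating a human user) can be sketched as follows. The event names and time window are illustrative assumptions, not Chino's implementation:

```python
HUMAN_EVENTS = {"scroll", "hover", "keypress"}

def billable_clicks(events: list[tuple[float, str]], window: float = 5.0) -> int:
    """Count clicks that follow a human-indicating event within `window` seconds."""
    count = 0
    last_human = None
    for timestamp, kind in events:
        if kind in HUMAN_EVENTS:
            last_human = timestamp
        elif kind == "click":
            if last_human is not None and timestamp - last_human <= window:
                count += 1
    return count

# A hypothetical event log: only the click at t=2.5 is preceded by a
# human-indicating event (the scroll at t=1.0) within the window.
log = [(0.0, "click"), (1.0, "scroll"), (2.5, "click"), (30.0, "click")]
print(billable_clicks(log))  # 1
```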
Conclusion
Listed below are the prior art references made of record and not relied upon, which are considered pertinent to applicant's disclosure.
Prabhat et al. (US 11651016 B2) teaches systems, methods, and computer-readable media for automated text classification, and particularly a mechanism for performing binary classification using only a set of positive labeled data as training data together with a large set of unlabeled data, where the algorithm can function without any information regarding the negative class. The disclosed classification systems and methods may use a text classification process which automatically classifies text based on the current positive training data available, but identifies additional words which can be added to the positive training data so that future iterations of the text classification can better identify the positive class of text.
Galitsky et al. (US 10853581 B2) teaches systems, devices, and methods to calculate a rhetorical relationship between one or more sentences. In an example, a computer-implemented method accesses a sentence comprising a plurality of fragments. At least one fragment includes a verb and words, each word having a role within the fragment. Each fragment is an elementary discourse unit. The method generates a discourse tree that represents rhetorical relationships between the sentence fragments. The discourse tree includes nonterminal and terminal nodes, each nonterminal node representing a rhetorical relationship between two of the sentence fragments, and each terminal node of the discourse tree is associated with one of the sentence fragments. The method matches each fragment that has a verb to a verb signature, thereby creating a communicative discourse tree.
Howald et al. (US 9355372 B2) teaches a method and system directed to predicting implicit rhetorical relations between two spans of text, e.g., in a large annotated corpus, such as the Penn Discourse Treebank (“PDTB”), Rhetorical Structure Theory corpus, and the Discourse Graph Bank, and particularly directed to determining a rhetorical relation in the absence of an explicit discourse marker. Surface level features may be used to capture pragmatic information encoded in the absent marker. In one manner a simplified feature set based only on raw text and semantic dependencies is used to improve performance for all relations. By using surface level features to predict implicit rhetorical relations for the large annotated corpus the invention approaches a theoretical maximum performance, suggesting that more data will not necessarily improve performance based on these and similarly situated features.
Cobb et al. (US 7313562 B2) teaches a method of content management. The method includes receiving a user input entered in a plurality of grammatical structured text entry elements associated with a content subject, each of the plurality of grammatical structured text entry elements having a rhetorical structure to facilitate selective assembly into at least one sentence, storing the plurality of grammatical structured text entry elements in a data record associated with the content subject, converting at least a portion of the data record into a structured format file supporting rhetorical elements, and rendering an electronically displayable document using the structured format file. The electronically displayable document includes the at least one grammatical structured text entry element integrated into at least one sentence. The structured format file includes at least one grammatical structured text entry element of the plurality of grammatical structured text entry elements.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NADIRA SULTANA whose telephone number is (571) 272-4048. The examiner can normally be reached M-F, 7:30 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D. Shah can be reached on (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NADIRA SULTANA/Examiner, Art Unit 2653