DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 25 recites a “computer storage media”, which typically covers both transitory and non-transitory media. Transitory media, including carrier waves or communication media, are viewed as physical characteristics of a form of energy, such as frequency, voltage, or the strength of a magnetic field, i.e., energy or magnetism per se, and as such are non-statutory natural phenomena. O’Reilly, 56 U.S. (15 How.) at 112-14. Moreover, it does not appear that the claims reciting the signal are encoded with functional descriptive material that falls within any of the categories of patentable subject matter set forth in § 101.
Claims 1-25 are rejected under 35 U.S.C. 101 as being directed to an abstract idea without significantly more.
Claims 1 and 24-25 are directed to the abstract idea of explaining a recommendation using information analysis and natural language generation, which falls within judicially recognized abstract ideas such as collecting information, analyzing it, and presenting the results (e.g., mental processes and methods of organizing human activity). The claim merely obtains contextual information about a recommended item and the recommendation context, combines that information, and generates an explanation describing why the recommendation was made. Explaining recommendations to users is a fundamentally informational and cognitive task that mirrors how a human would justify a recommendation using contextual reasoning, even if automated.
The claim does not integrate this abstract idea into a practical application or technical improvement. The use of a language model neural network is recited at a high level of generality and is used only as a generic tool to perform the abstract task of explanation generation. The claim does not improve the functioning of the computer itself, improve a specific machine learning architecture, reduce computational cost, or solve a technological problem in computer systems.
The claim elements amount to no more than well-understood, routine, and conventional computer activities: obtaining text data, forming an input sequence, processing that input with a known machine learning model, and outputting explanatory text. The claim does not recite any specialized data structures, novel training techniques, model architectures, or constrained deployment that would transform the abstract idea into patent-eligible subject matter.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims are (i) mere instructions to implement the idea on a computer, and/or (ii) recitations of generic computer structure that serves to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the pertinent industry. Viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. There is further no improvement to the computing device.
Dependent claims 2-23 further recite an abstract idea performable by a human and do not amount to significantly more than the abstract idea, as they do not provide steps beyond what is conventionally known in user/customer review systems.
Claim 2: merely presents generated information to a user, which is abstract data presentation.
Claim 3: adds receiving a user request and responding with information, which is routine interaction and abstract information exchange.
Claim 4: identifies topics and provides links to related content, which is organizing and presenting information.
Claim 5: defines the structure of input information without improving computer functionality.
Claim 6: uses example input-output pairs as guidance, which is abstract instructional information.
Claim 7: recites generic machine learning training concepts without a technical improvement.
Claim 8: uses a chain-of-thought prompt, which is an abstract reasoning aid.
Claim 9: adds user interest information to content analysis, which is abstract personalization.
Claim 10: generates descriptive text from metadata, which is abstract information summarization.
Claim 11: applies the abstract process to video metadata, a field-of-use limitation.
Claim 12: applies the abstract process to news article metadata, a field-of-use limitation.
Claim 13: applies the abstract process to product metadata, a field-of-use limitation.
Claim 14: applies the abstract process to electronic book metadata, a field-of-use limitation.
Claim 15: applies the abstract process to music metadata, a field-of-use limitation.
Claim 16: applies the abstract process to software application metadata, a field-of-use limitation.
Claim 17: limits description generation to summaries, which is abstract information condensation.
Claim 18: summarizes contextual information, which is abstract data analysis and presentation.
Claim 19: summarizes conversational turns, which is abstract conversational analysis.
Claim 20: summarizes search query text, which is abstract information processing.
Claim 21: uses prior search queries as context, which is abstract historical data analysis.
Claim 22: uses prior recommended items as context, which is abstract recommendation history analysis.
Claim 23: uses user interaction data as context, which is abstract behavioral data analysis.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 5-6, 9 and 24-25 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ni et al. (“Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects”; Nov. 3-7, 2019).
Claims 1 and 24-25,
Ni teaches a method performed by one or more computers, the method comprising ([Abstract] two personalized generation models: a reference-based Seq2Seq model and an aspect-conditional masked language model (implemented to generate explanations));
obtaining item context text characterizing a particular content item that was recommended to a user by a content recommendation system in a content recommendation context ([3.1] an item justification reference for each item: for each user u (or item i), we build a justification reference D consisting of justifications about the item on the training set);
obtaining recommendation context text characterizing the content recommendation context ([3.1] obtain a user persona (recommendation context) representing the user’s interests (we also obtain a user persona (or item profile) A based on the fine-grained aspects that the user’s (or item’s) previous justifications have covered (user interest));
generating, from the recommendation context text and the item context text, an input sequence to a language model neural network ([3.2] the Seq2Seq model uses two sequence encoders to process the user and item justification references (our framework, called ‘Ref2Seq’, views the historical justifications of users and items as references; it includes two sequence encoders that learn user and item latent representations by taking previous justifications as references); the inputs (user/item justification references) pass through an embedding layer and GRU to produce an encoded sequence, forming the input sequence to the neural network); and
processing the input sequence using the language model neural network to generate, as output, explanation text that represents an explanation of why the particular content item was recommended to the user in the content recommendation context ([3.2-3.3] the Seq2Seq model’s decoder uses the encoded user and item references to generate personalized justifications; they define the goal as predicting the justification Ju,i that would explain why item i fits user u’s interests; the decoder (a two-layer GRU) takes the input sequence and generates the target words).
Claim 2,
Ni further teaches the method of claim 1, further comprising: providing the explanation text for presentation to the user ([Introduction] [Table 1] the purpose of generating justifications is to explain recommendations to users; explaining or justifying recommendations; to present diverse justifications for different users based on their personal interests).
Claim 5,
Ni further teaches the method of claim 1, wherein the input sequence comprises the recommendation context text, the item context text, and a prompt sequence ([3.1] obtain a user persona (recommendation/user context) and use it as conditioning information; the ACMLM encodes aspects derived from the user persona into the model; obtain item-side textual context; supplying a template sequence (prompt starting sequence) to the LM decoder; generation begins from a template sequence X0; the ACMLM combines user persona/item profile aspects in an encoder, and the decoder input sequence is a masked template).
Claim 6,
Ni further teaches the method of claim 5, wherein the prompt sequence includes one or more example input-output pairs that each include (i) respective example recommendation context text and item context text and (ii) respective example explanation text ([3.3] [Table 4] a template sequence X0, e.g., (universe, [MASK] ##ble), with length T; at each iteration i, a position t_i is sampled uniformly at random from 1 to T, and the token at position t_i of the current sequence X^i is replaced by [MASK]).
Claim 9,
Ni further teaches the method of claim 1, further comprising: obtaining user text that represents topics of interest to the user ([3.1] user topics of interest as fine-grained aspects (i.e., topical properties), from which a user persona is built); and
wherein generating, from the recommendation context text and the item context text, an input sequence to a language model neural network comprises: generating the input sequence from the recommendation context text, the item context text, and the user text ([3.1-3.3] (i) using recommendation context via the user persona/aspects, (ii) item context via the item profile/aspects, and (iii) feeding these into the model input/conditioning).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Ni et al. (“Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects”; Nov. 3-7, 2019) in view of Billsus et al. (US 7,757,170).
Claim 3,
Ni teaches all the limitations in claim 2. The difference between the prior art and the claimed invention is that Ni does not explicitly teach receiving, from the user, a request for an explanation for the particular content item; and wherein providing the explanation text for presentation to the user comprises providing the explanation text in response to the request.
Billsus teaches receiving, from the user, a request for an explanation for the particular content item ([col. 6 lines 14-27] clicking on Amazon’s “why was this recommended” links); and
wherein providing the explanation text for presentation to the user comprises providing the explanation text in response to the request ([col. 6 lines 14-27] clicking the link reveals the explanation (explanation provided in response to the user’s click/request)).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni with the teachings of Billsus by modifying the method of justifying recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include receiving, from the user, a request for an explanation for the particular content item, and wherein providing the explanation text for presentation to the user comprises providing the explanation text in response to the request, as taught by Billsus, for the benefit of processing the plurality of recommendations stored in the log (Billsus [Abstract]).
Claim 4,
Ni further teaches the method of claim 2, wherein the explanation text identifies one or more topics that are relevant to the particular content item ([Introduction] we extract fine-grained aspects from justifications and build user personas and item profiles consisting of sets of representative aspects).
The difference between the prior art and the claimed invention is that Ni does not explicitly teach wherein providing the explanation text for presentation to the user comprises providing, for presentation to the user, a respective link for each of the one or more topics that, when selected by the user, causes additional content items that are relevant to the topic to be presented to the user.
Billsus teaches wherein providing the explanation text for presentation to the user comprises providing, for presentation to the user, a respective link for each of the one or more topics that, when selected by the user, causes additional content items that are relevant to the topic to be presented to the user (providing a selectable topic/category control that, when selected, yields topic-specific recommendations (the user can also select a specific category such as non-fiction or romance from a drop-down menu 202 to request category-specific recommendations)).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni with the teachings of Billsus by modifying the method of justifying recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include wherein providing the explanation text for presentation to the user comprises providing, for presentation to the user, a respective link for each of the one or more topics that, when selected by the user, causes additional content items that are relevant to the topic to be presented to the user, as taught by Billsus, for the benefit of processing the plurality of recommendations stored in the log (Billsus [Abstract]).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Ni et al. (“Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects”; Nov. 3-7, 2019) in view of Lester et al. (“The Power of Scale for Parameter-Efficient Prompt Tuning”; Sept. 2, 2021).
Claim 7,
Ni further teaches the method of claim 5, wherein: the language model neural network is configured to map each token in the input sequence to a respective embedding ([3.2] an embedding layer operating on the input justification references (these justifications pass through a word embedding layer); the model shares the same WordPiece embeddings as BERT).
The difference between the prior art and the claimed invention is that Ni does not explicitly teach that the prompt sequence includes one or more tunable tokens, that the respective embeddings for each of the one or more tunable tokens have been learned through prompt tuning on a set of training examples after the language model neural network has been pre-trained, and wherein each training example includes (i) a respective training input sequence and (ii) respective training explanation text.
Lester teaches the prompt sequence includes one or more tunable tokens, and the respective embeddings for each of the one or more tunable tokens have been learned through prompt tuning on a set of training examples after the language model neural network has been pre-trained ([Abstract] [Introduction] a prompt tuning mechanism for learning soft prompts to condition frozen pretrained language models; soft prompts are learned through backpropagation and can be tuned to incorporate signals from labeled examples; the soft prompt is trained end-to-end and can condense the signal from a full labeled dataset while the model is frozen), and
wherein each training example includes (i) a respective training input sequence and (ii) respective training explanation text (Ni teaches the training targets (explanation/justification text) and training against ground truth: (1) the target explanation is the justification Ju,i that would explain why item i fits user u’s interests; (2) the Seq2Seq training uses a cross-entropy loss compared against the ground-truth sequence; and (3) the inputs are the user (or item) justification references D; Lester teaches tuning using labeled examples (a dataset) to train the soft prompt while keeping the LM frozen).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni with the teachings of Lester by modifying the method of justifying recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include a prompt sequence that includes one or more tunable tokens, where the respective embeddings for each of the one or more tunable tokens have been learned through prompt tuning on a set of training examples after the language model neural network has been pre-trained, and wherein each training example includes (i) a respective training input sequence and (ii) respective training explanation text, as taught by Lester, for the benefits of robustness to domain transfer and efficient prompt ensembling (Lester [Abstract]).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Ni et al. (“Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects”; Nov. 3-7, 2019) in view of Wei et al. (“Chain-of-Thought Prompting Elicits Reasoning in Large Language Models”; NeurIPS 2022).
Claim 8,
Ni teaches all the limitations in claim 5. The difference between the prior art and the claimed invention is that Ni does not explicitly teach wherein the prompt sequence comprises a chain-of-thought prompt.
Wei teaches wherein the prompt sequence comprises a chain-of-thought prompt ([Abstract] generating a chain of thought—a series of intermediate reasoning steps—significantly improves the ability of large language models to perform complex reasoning).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni with the teachings of Wei by modifying the method of justifying recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include wherein the prompt sequence comprises a chain-of-thought prompt, as taught by Wei, for the benefit of surpassing even finetuned GPT-3 with a verifier (Wei [Abstract]).
Claims 10, 17-18 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Ni et al. (“Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects”; Nov. 3-7, 2019) in view of Lebret et al. (“Neural Text Generation from Structured Data with Application to the Biography Domain”; Sep. 23, 2016).
Claim 10,
Ni teaches all the limitations in claim 1. The difference between the prior art and the claimed invention is that Ni does not explicitly teach wherein obtaining item context text characterizing a particular content item that was recommended to a user in a content recommendation context comprises: obtaining metadata for the particular content item; and generating a natural language text description of the metadata.
Lebret teaches wherein obtaining item context text characterizing a particular content item that was recommended to a user in a content recommendation context comprises: obtaining metadata for the particular content item; and generating a natural language text description of the metadata ([Abstract] [Introduction] structured metadata (it generates biographical sentences from fact tables); concept-to-text generation renders structured records into natural language).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni with the teachings of Lebret by modifying the method of justifying recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include wherein obtaining item context text characterizing a particular content item that was recommended to a user in a content recommendation context comprises: obtaining metadata for the particular content item; and generating a natural language text description of the metadata, as taught by Lebret, for the benefit of allowing the model to embed words differently depending on the data fields in which they occur (Lebret [Abstract]).
Claim 17,
Lebret further teaches the method of claim 10, wherein generating a natural language text description of the metadata comprises generating a natural language summary of the metadata ([Introduction] generating the natural language text description; concept-to-text generation renders structured records into natural language (generating a natural-language description/sentence from records, functioning as a summary of the records)).
Claim 18,
Ni teaches wherein obtaining recommendation context text characterizing the content recommendation context comprises: obtaining text that provides context for the content recommendation context ([3.1] obtain context-provided text from the user (for each user u (or item i), we build a justification reference D consisting of justifications that the user has written)).
The difference between the prior art and the claimed invention is that Ni does not explicitly teach generating a natural language text summary of the text that provides context for the content recommendation context. Lebret teaches this feature ([Introduction] rendering structured records into natural language as a summary/description).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni with the teachings of Lebret by modifying the method of justifying recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include generating a natural language text summary of the text that provides context for the content recommendation context, as taught by Lebret, for the benefit of allowing the model to embed words differently depending on the data fields in which they occur (Lebret [Abstract]).
Claim 22,
Ni further teaches the method of claim 18, wherein the text that provides context for the content recommendation context comprises data describing one or more previous content items that have been previously recommended to the user by the content recommendation system ([3.1-3.2] user/item history implies prior items; item-specific texts are used as context: for each user, a justification reference is built from historical user text about items: we build a justification reference D consisting of justifications that the user has written (or justifications about the items)).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Ni et al. (“Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects”; Nov. 3-7, 2019) in view of Lebret et al. (“Neural Text Generation from Structured Data with Application to the Biography Domain”; Sep. 23, 2016) and further in view of Datar et al. (US 2008/0154908).
Claim 11,
Ni and Lebret teach all the limitations in claim 10. The difference between the prior art and the claimed invention is that neither Ni nor Lebret explicitly teaches wherein the particular content item is a video and the metadata comprises one or more of: a video title; a video description; text captions for audio from the video; salient terms for the video; relevant entities for the video; or topic tags that identify relevant topics for the video.
Datar teaches wherein the particular content item is a video and the metadata comprises one or more of: a video title; a video description; text captions for audio from the video; salient terms for the video; relevant entities for the video; or topic tags that identify relevant topics for the video ([0002] annotations provide a mechanism for supplementing video with useful information; annotations can contain, for example, metadata describing the content of the video, subtitles, or additional audio tracks).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni and Lebret with the teachings of Datar by modifying the method of justifying recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include wherein the particular content item is a video and the metadata comprises one or more of: a video title; a video description; text captions for audio from the video; salient terms for the video; relevant entities for the video; or topic tags that identify relevant topics for the video, as taught by Datar, for the benefit of making the content more meaningful (Datar [0002]).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Ni et al. (“Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects”; Nov. 3-7, 2019) in view of Lebret et al. (“Neural Text Generation from Structured Data with Application to the Biography Domain”; Sep. 23, 2016) and further in view of Becker et al. (US 2013/0219262).
Claim 12,
Ni and Lebret teach all the limitations in claim 10. The difference between the prior art and the claimed invention is that neither Ni nor Lebret explicitly teaches wherein the particular content item is a news article and the metadata comprises one or more of: a headline of the article; text of the article; a publisher of the article; or an author of the article.
Becker teaches wherein the particular content item is a news article and the metadata comprises one or more of: a headline of the article; text of the article; a publisher of the article; or an author of the article ([0032] metadata for each document including the title, authors, publisher etc.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni and Lebret with the teachings of Becker by modifying the method of justifying recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include wherein the particular content item is a news article and the metadata comprises one or more of: a headline of the article; text of the article; a publisher of the article; or an author of the article, as taught by Becker, for the benefit of enabling the viewing device to display a full text version of the selected article (Becker [Abstract]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Ni et al. (“Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects”; Nov. 3-7, 2019) in view of Lebret et al. (“Neural Text Generation from Structured Data with Application to the Biography Domain”; Sep. 23, 2016) and further in view of Bhagat (US 9,684,653).
Claim 13,
Ni and Lebret teach all the limitations in claim 10. The difference between the prior art and the claimed invention is that neither Ni nor Lebret explicitly teaches wherein the particular content item is a web page describing a product and the metadata comprises one or more of: a product name; a product description; a price of the product; or specifications of the product.
Bhagat teaches wherein the particular content item is a web page describing a product and the metadata comprises one or more of: a product name; a product description; a price of the product; or specifications of the product ([col. 2 line 56 to col. 3 line 4] product attributes: a title or description, technical specifications, the purchase price).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni and Lebret with the teachings of Bhagat by modifying the method of justifying recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include wherein the particular content item is a web page describing a product and the metadata comprises one or more of: a product name; a product description; a price of the product; or specifications of the product, as taught by Bhagat, for the benefit of verifying the translation of the resources and performing other functionality (Bhagat [Abstract]).
Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Ni et al. (“Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects”; Nov. 3-7, 2019) in view of Lebret et al. (“Neural Text Generation from Structured Data with Application to the Biography Domain”; Sep. 23, 2016) and further in view of Mishra et al. (US 9,826,285).
Claim 14,
Ni and Lebret teach all the limitations in claim 10. The difference between the prior art and the claimed invention is that neither Ni nor Lebret explicitly teaches wherein the particular content item is an electronic book and the metadata comprises one or more of: a title of the book; text from the book; a summary of the book; relevant topics for the book; a publisher of the book; or an author of the book.
Mishra teaches wherein the particular content item is an electronic book and the metadata comprises one or more of: a title of the book; text from the book; a summary of the book; relevant topics for the book; a publisher of the book; or an author of the book ([col. 1 line 45-57] summaries for media content such as electronic books and dynamically generated summaries or recaps for such media).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni and Lebret with the teachings of Mishra by modifying the method of justifying recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include wherein the particular content item is an electronic book and the metadata comprises one or more of: a title of the book; text from the book; a summary of the book; relevant topics for the book; a publisher of the book; or an author of the book, as taught by Mishra, for the benefit of summarizing relevant events in previous episodes or seasons (Mishra [Background]).
Claim 15,
Ni and Lebret teach all the limitations in claim 10. The difference between the prior art and the claimed invention is that neither Ni nor Lebret explicitly teaches wherein the particular content item is a music content item and the metadata comprises one or more of: a title of the music content item; lyrics from the music content item; a genre of the music content item; a description of the music content item; or an artist relevant to the music content item.
Mishra teaches wherein the particular content item is a music content item and the metadata comprises one or more of: a title of the music content item; lyrics from the music content item; a genre of the music content item; a description of the music content item; or an artist relevant to the music content item.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni and Lebret with the teachings of Mishra by modifying the justifying of recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include wherein the particular content item is a music content item and the metadata comprises one or more of: a title of the music content item; lyrics from the music content item; a genre of the music content item; a description of the music content item; or an artist relevant to the music content item as taught by Mishra for the benefits of summarizing relevant events in previous episodes or seasons (Mishra [Background]).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Ni et al. (“Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects”; Nov. 3-7, 2019) in view of Lebret et al. (“Neural Text Generation from Structured Data with Application to the Biography Domain”; Sep. 23, 2016) and further in view of McIntosh et al. (US 2016/0328402).
Claim 16,
Ni and Lebret teach all the limitations in claim 10. The difference between the prior art and the claimed invention is that neither Ni nor Lebret explicitly teaches wherein the particular content item is a software application and the metadata comprises one or more of: a type of the software application; a description of the software application; or a publisher of the software application.
McIntosh teaches wherein the particular content item is a software application and the metadata comprises one or more of: a type of the software application; a description of the software application; or a publisher of the software application ([0014] examples of metadata include the title of the application, the description of the application, information about the publisher of the application).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni and Lebret with the teachings of McIntosh by modifying the justifying of recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include wherein the particular content item is a software application and the metadata comprises one or more of: a type of the software application; a description of the software application; or a publisher of the software application as taught by McIntosh for the benefits of allowing the user to view the keyword report and perform further analysis on keyword usage for their mobile application or the subject mobile application (McIntosh [0004]).
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Ni et al. (“Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects”; Nov. 3-7, 2019) in view of Lebret et al. (“Neural Text Generation from Structured Data with Application to the Biography Domain”; Sep. 23, 2016) and further in view of Gliwa et al. (“SAMSum Corpus…”; Nov. 29, 2019).
Claim 19,
Ni and Lebret teach all the limitations in claim 18. The difference between the prior art and the claimed invention is that neither Ni nor Lebret explicitly teaches wherein the content recommendation context is a conversation between the user and another entity, and wherein the text that provides context for the content recommendation context comprises one or more conversational turns from the conversation.
Gliwa teaches wherein the content recommendation context is a conversation between the user and another entity ([Introduction] chat dialogues with annotated summaries), and
wherein the text that provides context for the content recommendation context comprises one or more conversational turns from the conversation ([4.2] for dialogues, we divide text into utterances (which is a natural unit in conversations)).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni with the teachings of Gliwa by modifying the justifying of recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include wherein the content recommendation context is a conversation between the user and another entity, and wherein the text that provides context for the content recommendation context comprises one or more conversational turns from the conversation as taught by Gliwa for the benefits of introducing a high-quality chat dialogues corpus, manually annotated with abstractive summarizations, which can be used by the research community for further studies (Gliwa [Abstract]).
Claims 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Ni et al. (“Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects”; Nov. 3-7, 2019) in view of Lebret et al. (“Neural Text Generation from Structured Data with Application to the Biography Domain”; Sep. 23, 2016) and further in view of Appelt et al. (US 2003/0078766).
Claim 20,
Ni and Lebret teach all the limitations in claim 18. The difference between the prior art and the claimed invention is that neither Ni nor Lebret explicitly teaches wherein the content recommendation context is a response to a search query submitted by the user, and wherein the text that provides context for the content recommendation context comprises text of the search query submitted by the user.
Appelt teaches wherein the content recommendation context is a response to a search query submitted by the user ([0079] the system responds to the natural language query and produces outputs in response to the search; a natural language query summary of results found in response to the search is generated and the system succinctly answer the user’s query), and
wherein the text that provides context for the content recommendation context comprises text of the search query submitted by the user ([0014] sending/transmitting the natural language query from a client and processing it: the handheld computer can transmit a natural language query to the natural language query engine).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni with the teachings of Appelt by modifying the justifying of recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include wherein the content recommendation context is a response to a search query submitted by the user, and wherein the text that provides context for the content recommendation context comprises text of the search query submitted by the user as taught by Appelt for the benefits of creating a stored indexed text corpus adapted to permit natural language querying (Appelt [0011]).
Claim 21,
Appelt further teaches the method of claim 20, wherein the text that provides context for the content recommendation context comprises one or more previous search queries submitted by the user ([0085] generating a corpus of training queries that may be captured from natural language queries submitted from user search sessions).
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Ni et al. (“Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects”; Nov. 3-7, 2019) in view of Lebret et al. (“Neural Text Generation from Structured Data with Application to the Biography Domain”; Sep. 23, 2016) and further in view of Zimovnov et al. (US 11,263,217).
Claim 23,
Ni and Lebret teach all the limitations in claim 22. The difference between the prior art and the claimed invention is that neither Ni nor Lebret explicitly teaches wherein the text that provides context for the content recommendation context comprises data characterizing user interactions with the one or more previous content items that have been previously recommended to the user by the content recommendation system.
Zimovnov teaches wherein the text that provides context for the content recommendation context comprises data characterizing user interactions with the one or more previous content items that have been previously recommended to the user by the content recommendation system ([col. 18 lines 17-25] track interactions of users with content items that were previously recommended by the recommendation service (i.e. the items are previously recommended by the recommender and interaction with those items are tracked/stored)).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Ni with the teachings of Zimovnov by modifying the justifying of recommendations using distantly-labeled reviews and fine-grained aspects as taught by Ni to include wherein the text that provides context for the content recommendation context comprises data characterizing user interactions with the one or more previous content items that have been previously recommended to the user by the content recommendation system as taught by Zimovnov for the benefits of determining user-specific proportions of content for recommendation to a given user of a recommendation system (Zimovnov [Field]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Anderson et al. (US 2015/0178265) – The present disclosure relates to applying techniques similar to those used in neural network language modeling systems to a content recommendation system. For example, by associating consumed media content to words of a language model, the system may provide content predictions based on an ordering. Thus, the systems and techniques described herein may produce enhanced prediction results for recommending content (e.g. word) in a given sequence of consumed content. In addition, the system may account for additional user actions by representing particular actions as punctuation in the language model.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHREYANS A PATEL whose telephone number is (571)270-0689. The examiner can normally be reached Monday-Friday 8am-5pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Desir can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
SHREYANS A. PATEL
Primary Examiner
Art Unit 2653
/SHREYANS A PATEL/ Examiner, Art Unit 2659