DETAILED ACTION
This office action is in response to Applicant’s amended submission filed on 12/17/2025.
Response to Amendment and Arguments
Specification objection
The amendment addressed the issue; therefore, the objection has been withdrawn.
Claim objection
The amendment addressed the issue; therefore, the objection has been withdrawn.
35 U.S.C. 112 Rejections
The amendment and argument are persuasive; therefore, the rejection has been withdrawn.
35 U.S.C. 101 Rejections
The amendment and argument are persuasive; therefore, the rejection has been withdrawn.
35 U.S.C. 103 Rejections
Applicant’s arguments are moot in view of the new or modified grounds of rejection that were necessitated by the amendments to the Claims.
Applicant’s arguments are directed to material that is added by the most recent amendments to the independent Claims. Response, p. 14.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 1-5, and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Gao, T., Yao, X., & Chen, D. (2021). Simcse: Simple contrastive learning of sentence embeddings. arXiv preprint arXiv:2104.08821., in view of Duan (US 20190163692), and further in view of Liu (US 20230244727).
Regarding Claim 1, Gao discloses: 1. A computer-implemented method for training a machine learning model for sentence pair matching in natural language processing, the computer-implemented method comprising: ([pg. 2, sect 1] "conduct a comprehensive evaluation of SimCSE on seven standard semantic textual similarity (STS) tasks." [pg. 1, sect 1] "In this work, we advance state-of-the-art sentence embedding methods and demonstrate that a contrastive objective can be extremely effective when coupled with pre-trained language models such as BERT... Our supervised SimCSE builds upon the recent success of using natural language inference (NLI) datasets for sentence embeddings (Conneau et al., 2017; Reimers and Gurevych, 2019) and incorporates annotated sentence pairs in contrastive learning (Figure 1(b)).")
preparing sentence pairs from a training dataset, wherein each sentence pair comprises a pairing of a ([pg. 1, sect 1] "Our supervised SimCSE builds upon the recent success of using natural language inference (NLI) datasets for sentence embeddings and incorporates annotated sentence pairs in contrastive learning (Figure 1(b))." Also see sect 4, Supervised SimCSE, which discloses details of training: "we extend (xi, xi+) to (xi, xi+, xi-) where xi is the premise, xi+ and xi- are entailment and contradiction hypotheses." The sentence pair is discussed in detail: (xi) represents the search string, the entailment hypothesis (xi+) represents the responsive target document, and the contradiction hypothesis (xi-) is a non-responsive one. The reference notes this data comes from NLI (Natural Language Inference) datasets, which represent a training dataset.)
ranking the sentence pairs based on an amount of similarity between the search string and the target document; ([pg. 5, sect 4] discloses a training objective function that inherently ranks the sentence pairs by similarity. The objective function (see eq. 5) includes the term sim(hi, hi+), the similarity between the premise (xi, the "search string") and the entailment hypothesis (xi+, the "target document"). The loss function encourages high similarity for positive pairs and low similarity for negative pairs, which is a form of implicit ranking.)
identifying an outmatched sentence pair, wherein the target document of the outmatched sentence pair is a non-responsive document to the search string; ([pg. 5, sect 4] "contradiction pairs as hard negatives." The contradiction hypothesis (xi-) is a non-responsive document to the premise (xi, the search string).) [The contradiction pair reads on the outmatched sentence pair.]
and utilizing the outmatched sentence pair to tune a parameter of a natural language processing model to generate a trained model. ([pg. 5, sect 4] "adding hard negatives can further improve performance." The objective function (eq. 5) explicitly uses the hard-negative contradiction hypothesis in the denominator via the term exp(sim(hi, hj-)/τ). The training objective is defined to minimize this loss function, thereby utilizing the outmatched sentence pair (the hard negative) to tune a parameter of the SimCSE model. The result of this tuning is the final supervised SimCSE model built on BERT, which is the trained model.)
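For clarity of the record, Gao's equation 5 (the supervised training objective incorporating hard negatives), as the examiner understands it, can be reproduced as follows:

```latex
% Supervised SimCSE objective with hard negatives (Gao, eq. 5)
\ell_i = -\log
\frac{e^{\,\mathrm{sim}(h_i,\,h_i^{+})/\tau}}
     {\sum_{j=1}^{N}\left(e^{\,\mathrm{sim}(h_i,\,h_j^{+})/\tau}
       + e^{\,\mathrm{sim}(h_i,\,h_j^{-})/\tau}\right)}
```

Minimizing this loss raises sim(hi, hi+) for the entailment (positive) pair while lowering sim(hi, hj-) for the contradiction (hard-negative) pair, which is the parameter tuning mapped to the claimed limitation above.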
Although Gao’s disclosure of sentence comparison can be applied to a search string and a target document, Gao does not explicitly disclose that the compared sentences are a search string (or query) and a target document.
Duan (in the related field of improving chatbot response to user query) discloses: sentence pair comprises a search string and a target document ([0003] In accordance with implementations of the subject matter described herein, a new approach for presenting a response to a message in a conversation is proposed. Generally speaking, in response to receiving a message in a conversation, the message will be matched with one or more documents on the sentence basis. That is, the received message is compared with the sentences from a document(s), rather than predefined query-response pairs.)
Duan also discloses ranking candidate sentences in para [0025], which could obviously be applied to ranking sentence pairs as well.
Gao and Duan are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Gao to combine the teaching of Duan for the above mentioned features, because by using sentences from a document rather than Q-R pairs, the adaptability of the chatbot system on different chatting topics is significantly improved. Moreover, the sentences coming from a document(s) make the responses meaningful and satisfying. Thus, a more suitable response can be presented in the conversation (Duan, [0019]).
Gao and Duan do not explicitly disclose artificially deflating a similarity score of the outmatched sentence pair.
Liu discloses: artificially deflating a similarity score of the outmatched sentence pair; ([0090] Advantageously, the training event samples 446 having negative engagement 542 are not eliminated from the search results 380—rather, they may be assigned lower values and given less weight in comparison to the training samples having positive engagement 541. Labeling the training event samples 446 in this manner reduces the noise associated with the raw search event data and can improve the learning of the personalized ranking model 360.) Also see para [0094]. [The reference discloses using both positive and negative samples, applying differential weighting, and adjusting model parameters, where the goal is to learn an embedding space in which similar items are close together and dissimilar items are far apart.]
Gao/Duan/Liu are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Gao/Duan to combine the teaching of Liu for the above mentioned features, because the machine learning model would be trained to differentiate between good and bad results (Liu, [0090]).
Regarding Claim 14, Gao in view of Duan discloses: 14. A computer program product for training a natural language processing model for search a knowledge database for a response to a query, the computer program product comprising a non-transitory computer readable storage medium having computer executable instructions embodied therewith, the computer executable instructions executable by one or more processors to cause the one or more processors to: (Although it could be implied that Gao uses a computer to implement its disclosure, Duan explicitly discloses in [0101]: "Implementations of the subject matter described herein may further include one or more computer program products being tangibly stored on a non-transient machine-readable medium and comprising machine-executable instructions. The instructions, when executed on a device, causing the device to carry out one or more processes as described above.")
The remaining limitations of the claim recite the elements of Claim 1, and therefore the rationale applied in the rejection of Claim 1 is equally applicable.
Regarding Claim 2, Gao/Duan/Liu discloses all the limitations of Claim 1 (see the detailed mapping above).
Gao further discloses: further comprising: generating training pairs to tune the parameter of the natural language processing model, ([pg. 4, sect 4] Supervised SimCSE: Choices of labeled data. "We first explore which supervised datasets are especially suitable for constructing positive pairs (xi, xi+)." Eq. 5 shows a training objective function which is tuned to improve model performance.)
wherein the training pairs comprise a positive data sample and a negative data sample. ([pg. 1, sect 1] "Our supervised SimCSE builds upon the recent success of using natural language inference (NLI) datasets for sentence embeddings and incorporates annotated sentence pairs in contrastive learning (Figure 1(b))." Also see pg. 5, sect 4, Supervised SimCSE, which discloses details of training: "we extend (xi, xi+) to (xi, xi+, xi-) where xi is the premise, xi+ and xi- are entailment and contradiction hypotheses." The sentence pair is discussed in detail: (xi) represents the search string, the entailment hypothesis (xi+) represents the responsive target document (the positive sample), and the contradiction hypothesis (xi-) is a non-responsive one (the negative sample). The reference notes this data comes from NLI (Natural Language Inference) datasets, which represent a training dataset.)
Regarding Claim 3, Gao/Duan/Liu discloses all the limitations of Claim 2 (see the detailed mapping above).
Gao in view of Duan/Liu further discloses: wherein the positive pairing sample is a first sentence pair comprising the search string and a responsive document, ([pg. 4, sect 4] Supervised SimCSE: Choices of labeled data. "We first explore which supervised datasets are especially suitable for constructing positive pairs (xi, xi+)." Duan already discloses the search string and responsive document, as mapped in Claim 1.)
wherein the negative pairing sample is a second sentence pair comprising the search string and the non-responsive document, ([pg. 5, sect 4] "we extend (xi, xi+) to (xi, xi+, xi-) where xi is the premise, xi+ and xi- are entailment and contradiction hypotheses." The sentence pair is discussed in detail: (xi) represents the search string, the entailment hypothesis (xi+) represents the responsive target document (the positive sample), and the contradiction hypothesis (xi-) is a non-responsive one (the negative sample).)
Liu further discloses: wherein the positive pairing sample is characterized by an artificially inflated similarity score, ([0090] Advantageously, the training event samples 446 having negative engagement 542 are not eliminated from the search results 380—rather, they may be assigned lower values and given less weight in comparison to the training samples having positive engagement 541. Labeling the training event samples 446 in this manner reduces the noise associated with the raw search event data and can improve the learning of the personalized ranking model 360.) Also see para [0094]. [The reference discloses using both positive and negative samples, applying differential weighting, and adjusting model parameters, where the goal is to learn an embedding space in which similar items are close together and dissimilar items are far apart. Also, since a negative engagement sample can be given less weight, it would be obvious that a positive sample can be given more weight, i.e., inflating it.]
and wherein the negative pairing sample is characterized by an artificially deflated similarity score. ([0090] Advantageously, the training event samples 446 having negative engagement 542 are not eliminated from the search results 380—rather, they may be assigned lower values and given less weight in comparison to the training samples having positive engagement 541. Labeling the training event samples 446 in this manner reduces the noise associated with the raw search event data and can improve the learning of the personalized ranking model 360.) Also see para [0094]. [The reference discloses using both positive and negative samples, applying differential weighting, and adjusting model parameters, where the goal is to learn an embedding space in which similar items are close together and dissimilar items are far apart.]
The rationale for the combination would be similar to the one already provided.
Regarding Claim 4, Gao/Duan/Liu discloses all the limitations of Claim 3 (see the detailed mapping above).
Gao further discloses: wherein the second sentence pairing is ranked higher than the first sentence pairing as a result of the ranking. [In view of the specification, this claim can be read as: the unresponsive or hard negative (the contradiction pair) can actually rank higher than the positive or true pairing.] ([pg. 5, sect 4] Contradiction as hard negatives: "Finally, we further take the advantage of the NLI datasets by using its contradiction pairs as hard negatives. In NLI datasets, given one premise, annotators are required to manually write one sentence that is absolutely true (entailment), one that might be true (neutral), and one that is definitely false (contradiction). Therefore, for each premise and its entailment hypothesis, there is an accompanying contradiction hypothesis (see Figure 1 for an example).") [A hard negative is a specific type of negative example that the model incorrectly scores/ranks too highly: a document/response that, despite being irrelevant or unresponsive, appears relevant to the model, causing it to be "outmatched" or ranked higher than the correct, relevant document/response. In an NLI dataset, a premise-hypothesis pair labeled as a "contradiction" is specifically designed to be logically false given the premise. Because these hypotheses are often manually written to be subtle or challenging, they act as high-quality, pre-defined hard negatives. The concept in both the instant application and the primary reference is to deliberately train the model with challenging negative examples to make it more robust.]
Regarding Claim 5, Gao/Duan/Liu discloses all the limitations of Claim 4 (see the detailed mapping above).
Gao further discloses: wherein the natural language processing model is executed to perform the ranking of the sentence pairs, ([pg. 5, sect 4] discloses a training objective function that inherently ranks the sentence pairs by similarity. The objective function (see eq. 5) includes the term sim(hi, hi+), the similarity between the premise (xi, the "search string") and the entailment hypothesis (xi+, the "target document"). The loss function encourages high similarity for positive pairs and low similarity for negative pairs, which is a form of implicit ranking.)
Duan further discloses: and wherein the ranking generates a first initial similarity score for the first sentence pair and a second initial similarity score for the second sentence pair. ([0053] A ranking algorithm in the LTR model 422 may take a plain text and reference QA pairs in the reference QA database as inputs, and compute similarity scores between the plain text and each reference QA pair through at least one of word matching and latent semantic matching.)
The rationale for the combination would be similar to the one already provided.
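As an illustrative sketch only (the examiner's own illustration, not code from any cited reference), the similarity-based ranking mapped above — computing an initial similarity score for each sentence pair and ordering the pairs by that score — can be expressed as follows, where `query_vec` stands in for the search string's embedding and each candidate vector for a target document's embedding (all names are hypothetical):

```python
import math

def cosine(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank_pairs(query_vec, candidate_vecs):
    # score each (query, candidate) sentence pair, then rank by similarity,
    # highest-scoring (most responsive) pair first
    scored = [(cosine(query_vec, c), i) for i, c in enumerate(candidate_vecs)]
    return sorted(scored, reverse=True)
```

This is only a sketch of the general ranking principle; the cited references rank with learned models rather than a fixed cosine computation.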
Claim 15 is a computer program product claim that corresponds to claim 3 and is rejected under a similar rationale.
Claim 16 is a computer program product claim that corresponds to claim 4 and is rejected under a similar rationale.
Claim 17 is a computer program product claim that corresponds to claim 5 and is rejected under a similar rationale.
Claims 6, 9, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gao in view of Duan/Liu, and further in view of Willis (US 20130031216).
Regarding Claim 6, Gao/Duan/Liu discloses all the limitations of Claim 5 (see the detailed mapping above).
Gao in view of Duan and Liu does not explicitly disclose: further comprising: generating the artificially inflated similarity score by increasing the first initial similarity score by a first defined amount; and generating the artificially deflated similarity score by decreasing the second initial similarity score by a second defined amount.
Willis (in the related field of determining content matching) discloses: further comprising: generating the artificially inflated similarity score by increasing the first initial similarity score by a first defined amount; ([0152] If the traits match, then at step 536, a matching score or similarity score between the first user and second user may be increased by an amount proportional to the retrieved weight for the trait.)
and generating the artificially deflated similarity score by decreasing the second initial similarity score by a second defined amount. ([0152] Similarly, at step 538, if the traits do not match, then a matching score or similarity score between the first user and second user may be decreased by an amount proportional to the retrieved weight for the trait.)
Gao/Duan/Liu/Willis are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Gao/Duan/Liu to combine the teaching of Willis for the above mentioned features, because by prioritizing important traits, the matching algorithm can more accurately predict the quality and relevance of a match, leading to better user experience (Willis, [0152]).
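Purely as an illustrative sketch (the examiner's own illustration, not drawn from Willis or any other cited reference), the claimed operation — raising the positive pair's initial score by a first defined amount and lowering the negative pair's initial score by a second defined amount — amounts to the following, with hypothetical default amounts:

```python
def adjust_scores(pos_initial, neg_initial, inflate_by=0.1, deflate_by=0.1):
    # artificially inflate the positive pair's similarity score and deflate
    # the negative pair's score by fixed, defined amounts
    return pos_initial + inflate_by, neg_initial - deflate_by
```

Willis's mapped disclosure differs in that the increase/decrease is proportional to a retrieved trait weight rather than a constant, but the adjust-by-a-defined-amount principle is the same.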
Claim 18 is a computer program product claim that corresponds to claim 6 and is rejected under a similar rationale.
Regarding Claim 9, Gao/Duan/Liu/Willis discloses all the limitations of Claim 6 (see the detailed mapping above).
Liu further discloses: validating the outmatched sentence pair and a matched sentence pair with the trained model to evaluate an accuracy metric characterizing the trained model’s ability to identify target documents that are responsive to the search string. ([0090] Advantageously, the training event samples 446 having negative engagement 542 are not eliminated from the search results 380—rather, they may be assigned lower values and given less weight in comparison to the training samples having positive engagement 541. Labeling the training event samples 446 in this manner reduces the noise associated with the raw search event data and can improve the learning of the personalized ranking model 360.) Also see para [0094]. [By assigning lower values/less weight to negative engagement samples (irrelevant documents) rather than eliminating them, the model learns a better understanding of relevance. This refined training process contributes to the model's ability to accurately distinguish between relevant and irrelevant document pairs during the validation phase.]
The rationale for the combination would be similar to the one already provided.
Claim 20 is a computer program product claim that corresponds to claim 9 and is rejected under a similar rationale.
Claims 7-8, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Gao in view of Duan, further in view of Liu, further in view of Willis, and further in view of Spoliansky (US 20220414447).
Regarding Claim 7, Gao/Duan/Liu/Willis discloses all the limitations of Claim 6 (see the detailed mapping above).
Gao in view of Duan/Liu/Willis does not explicitly disclose: further comprising: discarding from the training dataset expected result sentence pairs to generate a revised training dataset, wherein expected result sentence pairs are the sentence pairs positioned in a predefined top portion of the ranking and comprise one or more responsive documents to the search string.
Spoliansky (in the related field of curriculum learning for neural networks) discloses: further comprising: discarding from the training dataset expected result sentence pairs to generate a revised training dataset, ([0020] Accordingly, a portion and/or percentage of those labeled data candidates which the neural network correctly classified during the given training epoch can be dropped out and/or otherwise removed, such that they are not present in the next training epoch.)
wherein expected result sentence pairs are the sentence pairs positioned in a predefined top portion of the ranking and comprise one or more responsive documents to the search string. ([0015] When curriculum learning is implemented, the series of training epochs can be structured, ordered, and/or otherwise organized such that the training epochs get progressively more difficult over time. That is, training epochs that occur earlier in the series of training epochs can contain labeled data candidates which are considered to be easier for the neural network to accurately classify, while training epochs that occur later in the series of training epochs can contain labeled data candidates which are considered to be harder and/or more complicated for the neural network to accurately classify. When easier training epochs are performed before more difficult training epochs, the neural network can more steadily and incrementally improve in classification accuracy.) [The search string and documents were already disclosed by Duan in Claim 1, and sentence pairs were already disclosed by Gao in Claim 1. The primary concept here is that the most obvious or easiest samples can be replaced with harder or more difficult samples in training, as is the case in both the specification of the instant application and the reference cited here.]
Gao/Duan/Liu/Willis/Spoliansky are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Gao/Duan/Liu/Willis to combine the teaching of Spoliansky for the above mentioned features, because curriculum learning can be automatically facilitated by the computerized tool described herein without relying upon and/or otherwise requiring that the training epochs be manually structured and/or organized in order of increasing difficulty (Spoliansky, [0035]).
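As an illustrative sketch only (the examiner's own illustration, not code from Spoliansky), discarding the top-ranked "expected result" pairs to form a revised, harder training set can be expressed as follows, where the ranked list and the `top_fraction` parameter are hypothetical:

```python
def discard_expected(ranked_pairs, top_fraction=0.2):
    # ranked_pairs is ordered best-match first; drop the predefined top
    # portion (the easy, expected results) and keep the harder remainder
    # as the revised training dataset
    k = int(len(ranked_pairs) * top_fraction)
    return ranked_pairs[k:]
```

This mirrors the curriculum-learning principle in Spoliansky's [0020], where correctly classified (easy) candidates are dropped from subsequent training epochs.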
Regarding Claim 8, Gao/Duan/Liu/Willis/Spoliansky discloses all the limitations of Claim 7 (see the detailed mapping above).
Gao further discloses: further comprising: tuning the trained model using the revised training data. ([pg. 5, sect 4] Contradiction as hard negatives: "adding hard negatives can further improve performance." The objective function (eq. 5) explicitly uses the hard-negative contradiction hypothesis in the denominator via the term exp(sim(hi, hj-)/τ). The training objective is defined to minimize this loss function, thereby utilizing the outmatched sentence pair (the hard negative) to tune a parameter of the SimCSE model. The result of this tuning is the final supervised SimCSE, which is the trained model.) [The revised training data is disclosed by Spoliansky from Claim 7.]
Claim 19 is a computer program product claim that corresponds to claim 7 and is rejected under a similar rationale.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Duan in view of Wu (US 20200042597), and further in view of Liu (already of record).
Regarding Claim 10, Duan discloses: 10. A chatbot system, comprising: (see fig. 1, a chatbot system)
a virtual assistant (see fig. 1, the chatbot engine is a virtual assistant), implemented on at least one processor (see fig. 7, processing units (710)), that employs the trained machine learning model ([0055] "a machine learning ranking model may be trained to rank a plurality of sentences.") to identify content data from a ([0003] "in response to receiving a message in a conversation, the message will be matched with one or more documents on the sentence basis. That is, the received message is compared with the sentences from a document(s), rather than predefined query-response pairs." [0039] "At 304, similarities between the received message and the subset of sentences are determined at a plurality of levels, which are used as metrics of the relevance between the received message and the subset of sentences.") Also see para [0055].
and an article attribute, wherein the article attribute is at least one of a content attribute ([0003] in response to receiving a message in a conversation, the message will be matched with one or more documents on the sentence basis. That is, the received message is compared with the sentences from a document(s), rather than predefined query-response pairs.)
Duan does not explicitly disclose knowledge database and similarity score.
Wu (in the related field of generating Q/A pairs for chatbot) discloses: knowledge database. ([0032] The chatbot server 130 may connect to or incorporate a chatbot database 140. The chatbot database 140 may comprise information that can be used by the chatbot server 130 for generating responses.)
Similarity score ([0053] rank the reference QA pairs based on the similarity scores.)
Duan and Wu are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Duan to combine the teaching of Wu for the above mentioned features, because incorporating a knowledge database gives a chatbot access to organized, reliable information, allowing it to provide instant, accurate, and consistent answers to user queries 24/7, thereby enhancing customer satisfaction and reducing operational costs (Wu, [0032]).
Duan and Wu do not explicitly disclose a machine learning model, stored in memory, that has been trained by adjusting one or more parameter weight values based on an inflated similarity score of a positive pair sample and a deflated similarity score of a negative pair sample.
Liu discloses: a machine learning model, stored in memory, that has been trained by adjusting one or more parameter weight values based on an inflated similarity score of a positive pair sample and a deflated similarity score of a negative pair sample; ([0090] Advantageously, the training event samples 446 having negative engagement 542 are not eliminated from the search results 380—rather, they may be assigned lower values and given less weight in comparison to the training samples having positive engagement 541. Labeling the training event samples 446 in this manner reduces the noise associated with the raw search event data and can improve the learning of the personalized ranking model 360. [0094] In some embodiments, the values of labels assigned to training event samples 446 having negative engagement may generally be lower than the values of training event samples 446 having positive engagement.)
Duan/Wu/Liu are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Duan/Wu to combine the teaching of Liu for the above mentioned features, because the machine learning model would be trained to differentiate between good and bad results (Liu, [0090]).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Duan in view of Wu and Liu, and further in view of Chandrasekaran (US 20160196490).
Regarding Claim 11, Duan/Wu/Liu discloses all the limitations of Claim 10 (see the detailed mapping above).
Duan/Wu/Liu does not explicitly disclose wherein the virtual assistant comprises: a knowledge database preparer that generates the knowledge database to include a plurality of articles that include the content data, the search attribute, and the filter attribute; and an indexer configured to index the knowledge base based on semantic characteristics of text data comprised within the knowledge base.
Chandrasekaran (in the related field of question answering systems) discloses: wherein the virtual assistant comprises: a knowledge database preparer that generates the knowledge database to include a plurality of articles that include the content data, the search attribute, and the filter attribute; ([0002] The ingestion content recommendation engine uses the extracted variables and context information to mine the interaction history to identify low confidence/quality answers that meet specified answer deficiency criteria (e.g., low confidence, no answer, negative sentiment, repeated questions, absence of evidence, answers with a certain confidence threshold for a given class of users, etc.) to find and filter relevant content in one or more content sources (e.g., enterprise content management or knowledge management system repositories) that will improve the quality of the answer, and to recommend the resulting content for ingestion into the knowledge database corpus used by the QA system. The ingestion content recommendations may include, for each recommendation, a link to the recommended source document and reasons for making the recommendation. In this way, the domain expert or system knowledge expert can review and evaluate the ingestion content recommendations to select one or more recommended source documents for ingestion into the natural language-based QA system.) [The "low confidence/quality answers" describe the search attribute. The engine uses this attribute, along with others like "no answer," "negative sentiment," and "repeated questions," to mine interaction history and pinpoint deficiencies, which then drives the search for relevant content to enhance the knowledge base.]
and an indexer configured to index the knowledge base based on semantic characteristics of text data comprised within the knowledge base. ([0039] the association process at step 312 may apply other topic extraction methods, such as Latent Semantic Analytics (LSA) (a.k.a., Latent Semantic Indexing (LSI)), to perform a singular value decomposition (SVD) or similar dimensionality reduction technique to automatically match a selected interaction to one or more topics, As a result of the processing step 312, each question and answer interaction may be identified or viewed as a collection of one or more topics from a specified topical hierarchy.)
Duan, Wu, Liu and Chandrasekaran are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Duan/Wu/Liu to combine the teaching of Chandrasekaran for the above-mentioned features, because the proposed system improves the quality of answers provided by the Q/A system (Chandrasekaran, [0002]).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Duan in view of Wu and Liu, further in view of Chandrasekaran, and furthermore in view of Sewak (US 20240370484).
Regarding Claim 12, Duan/Wu/Liu/Chandrasekaran discloses all the limitations of Claim 11 (see detailed mapping above).
Duan further discloses: a machine learning model ([0055] The machine learning ranking model can be trained with the question-answer pairs crawled from community websites.)
and a similarity score threshold ([0065] If the received message satisfies the first condition, at 404, it is determined whether the relevance between the received message and the sentence satisfies a second condition. For example, if a degree of the relevance exceeds a threshold degree, the relevance satisfies the second condition.)
Duan/Wu/Liu/Chandrasekaran does not explicitly disclose an application program interface that executes a machine learning model to search the knowledge database for an article comprising the content data that is related to the query by a defined similarity score threshold.
Sewak (in the related field of labeling of text data) discloses: an application program interface that executes a machine learning model to search the knowledge database for an article comprising the content data that is related to the query by a defined similarity score threshold. ([0184] An embodiment uses an API version of a search engine. The search engine determines a block of ranked retrieval results including a rank for each result and a search score for each result, and a text snippet that samples the document at a location relevant to the query.)
Duan/Wu/Liu/Chandrasekaran/Sewak are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Duan/Wu/Liu/Chandrasekaran to combine the teaching of Sewak for the above-mentioned features, because an API provides a standardized interface for interacting with the ML model, making it easy to integrate into search applications (Sewak, [0184]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Duan in view of Wu and Liu, further in view of Chandrasekaran and Sewak, and furthermore in view of Sharma (US 20230059979).
Regarding Claim 13, Duan/Wu/Liu/Chandrasekaran/Sewak discloses all the limitations of Claim 12 (see detailed mapping above).
Duan/Wu/Liu/Chandrasekaran/Sewak does not explicitly disclose further comprising: an integrator that executes a fulfillment code to generate a customizable response to the query based on the identified content data.
Sharma (in the related field of artificial intelligence (AI)-based virtual assistants for use with call or contact centers using AI-enabled smart machines and devices) discloses: an integrator that executes a fulfillment code to generate a customizable response to the query based on the identified content data. ([0026] The fulfillment unit 110 may be developed using JavaScript to leverage the identified intents as determined by the NLU platform 106 and then call out appropriate component(s) to provide a dynamic response based on the request. ... Upon receiving a response, the fulfillment unit 110 may customize and format the response based on user 116 parameters, virtual assistant 104 parameters, and/or smart device 102 parameters to deliver a natural and smooth user experience.)
Duan/Wu/Liu/Chandrasekaran/Sewak/Sharma are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Duan/Wu/Liu/Chandrasekaran/Sewak to combine the teaching of Sharma for the above-mentioned features, because the system delivers a natural, smooth and customized experience for the user (Sharma, [0026]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Veit US 20230111978 – discloses extending stochastic negative mining to cross-example negative mining. “In some implementations, the proposed method further can allow an extension of the concept of stochastic negative mining to cross-example negative mining. Instead of mining the most informative negative documents only for the given query, non-matching pairs can be selected with the highest similarity score, even if they are for a different query.” See Abstract, para 0008, 0034, 0037, 0047, 0073 and fig. 1-7 for additional details.
Yang US 20180307720 – discloses a group tagging and clustering arrangement involving negative samples and positive samples; see para 0043 for additional details.
Chung US 20210089904 – discloses “That is, the adversarial perturbation values (for example, r.sub.i−1.sup.adv and r.sub.i.sup.adv in FIG. 2) may be used as information for intentionally decreasing similarity between the input word embedding value (for example, w.sub.i−1 of FIG. 2) and the target word embedding value (for example, w.sub.i of FIG. 2).” See para 0108 for additional details.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Phillip H Lam whose telephone number is (571)272-1721. The examiner can normally be reached 9 AM-3 PM Pacific Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached on (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHILIP H LAM/ Examiner, Art Unit 2656
/BHAVESH M MEHTA/ Supervisory Patent Examiner, Art Unit 2656