DETAILED ACTION
This Office action is in response to the communication filed on 03/03/2026. Claims 1-3, 5-12, 14, 15 and 17-20 are currently pending; claims 4 and 16 have been canceled, and claims 1, 5, 11, 14 and 20 have been amended.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment/Arguments
Applicant’s amendments and arguments regarding the double patenting rejection of claims 1, 2, 14, 15 and 20 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn.
Applicant’s amendments and arguments with respect to the rejection of pending claims 1-3, 5-12, 14, 15 and 17-20 under 35 U.S.C. § 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 5-7, 9, 12, 14, 15, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Singh et al. (US 2022/0172024 A1; herein “Singh”) in view of Zhang (US 10,146,751 B1; herein “Zhang”), further in view of Rodgers et al. (US 11,847,246 B1; herein “Rodgers”), and still further in view of Jayaraman et al. (US 10,459,962; herein “Jayaraman”).
Regarding claims 1, 14 and 20, Singh teaches an apparatus, method, and computer program product for generating UGC transformed alert data from a monitoring service alert, the apparatus comprising at least one processor (Fig. 2, processor 22) and at least one memory including program code (Fig. 2, memory 14; and ¶[0014] teaches “memory 14 for storing information and instructions to be executed by processor 22”), the at least one memory and the program code configured to, with the at least one processor, cause the apparatus to at least:
retrieve the monitoring service alert, wherein the monitoring service alert comprises a text string, including user generated content (UGC) text (¶[0044] teaches “the service ticket includes information on the service produce 502 at issue, as well as the service category 504 at issue. The information may be entered by the user in a freeform manner, or via a menu of possible options”);
utilize a parser to identify one or more first portions of the text string related to identifying a reported problem, and one or more second portions of the text string related to a description of the reported problem (¶[0064] teaches “At 702, the service ticket is parsed and the summary and description of the issue is extracted” the “summary” is interpreted as one or more first portions related to identifying a reported problem and the “description” is interpreted as one or more second portions related to a description of the reported problem);
compile the one or more first portions of the text string into an alert message problem component, and the one or more second portions of the text string into an alert auxiliary details component (¶[0064] teaches “At 702, the service ticket is parsed and the summary and description of the issue is extracted” and ¶[0048] teaches “…an embedding layer/lookup 302 receives the summary and description of the filed ticket as inputs…There are two input layers in the model, one for the description and one for summary…” Accordingly, Singh teaches the parsed data is compiled into two components, the summary component and the description component. The summary component is interpreted as the alert message problem component and the description component is interpreted as the alert auxiliary details component);
generate an alert message problem embedding by applying a first feature extraction to the alert message problem component (¶[0048] teaches “…an embedding layer/lookup 302 receives the summary and description of the filed ticket as inputs…There are two input layers in the model, one for the description and one for summary…which then outputs sequences of embedding vectors for the corresponding word positions as description embeddings (Demb) and summary embeddings (Semb)”);
generate an alert message description embedding by applying a second feature extraction to the alert auxiliary details component (¶[0048] teaches “…an embedding layer/lookup 302 receives the summary and description of the filed ticket as inputs…There are two input layers in the model, one for the description and one for summary…which then outputs sequences of embedding vectors for the corresponding word positions as description embeddings (Demb) and summary embeddings (Semb)”); and
generate the alert data based on the alert message problem embedding and the alert message description embedding (¶[0050] teaches “The output from the LSTMs layer 307, 308 is fed into an MHA layer 309, 310 and … two context vectors are generated, one for summary (Sc) and one for description (Dc). Sc and Dc are concatenated together…”).
However, Singh fails to disclose that the parser is a semantic parser.
Zhang discloses a system and method for creating structured or semi-structured representations of information extracted from unstructured text data sources that includes, inter alia, identifying the types of information contained in the unstructured data. For pre-defined target information types, the methods identify the context and content of the portions of the text that represent the target information type (Zhang, Abstract). More specifically, Zhang teaches the method utilizes a semantic parser to identify one or more portions of a text string corresponding to predefined target information types (Fig. 2, element 211 teaches the text analysis includes a syntactic/semantic parser; and col. 8, lines 33-37 teaches “…The text analysis module includes algorithms that can function as a syntactic or semantic parser”).
Singh differs from the claimed invention, as defined by claims 1, 14, and 20, in that Singh fails to disclose utilizing a semantic parser to extract the summary and description portions. Semantic parsers for extracting information from text sources are known in the art as evidenced by Zhang. Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the invention, to have modified the service ticket processing system of Singh to include utilizing a semantic parser to extract the summary and description portions of the service ticket text as taught by Zhang, as it merely constitutes the combination of known processes to achieve the predictable result of “identifying certain pre-defined types of information in the unstructured data source and extracting specific text content containing or representing such information” (Zhang, col. 2, lines 25-28).
The combination of Singh and Zhang fails to disclose that the output corresponds to UGC transformed alert data. Under a broadest reasonable interpretation, UGC transformed alert data is interpreted as tokenized/anonymized alert data.
Rodgers teaches a system and method for communicating sensitive private information that includes, inter alia, replacing each of the one or more UGC data components with one or more generic data tokens based at least in part on a UGC type of the UGC data component (col. 3, lines 33-43 teaches “the organization creates a token 106 that represents the marriage event label for training or inference to a machine learning system 104…The token provides a mechanism whereby the organization 102 can communicate event or attribute labels to the machine learning system 104 without revealing to the learning machine system 104 the meaning of the event or attribute labels”).
The combination of Singh and Zhang differs from the claimed invention, as defined by claims 1, 14, and 20, in that the combination fails to disclose transforming the UGC alert data. Anonymizing private or sensitive data utilized by machine learning systems is known in the art as evidenced by Rodgers. Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the invention, to have modified the service ticket processing system taught by the combination of Singh and Zhang to include transforming/anonymizing the UGC data as taught by Rodgers, as it merely constitutes the combination of known processes to achieve the predictable result of allowing use of private/sensitive data by machine learning models without revealing to the learning machine system 104 the meaning of the event or attribute labels (Rodgers, col. 3, lines 39-43).
The combination of Singh, Zhang and Rodgers fails to disclose the first feature extraction utilizes a non-linear embedding technique to generate the alert message problem embedding, the second feature extraction utilizes a second embedding technique to generate the alert message description embedding, or that the second embedding technique is different from the non-linear embedding technique.
Jayaraman teaches a method and system for selectively generating word embeddings and paragraph embeddings representing text from subsets of fields in incident reports (Jayaraman, Abstract). More specifically, Jayaraman teaches the first feature extraction utilizes a non-linear embedding technique to generate the alert message problem embedding (col. 2, lines 58-66 teaches “The method additionally includes obtaining an ANN… such that, for each of the incident reports: (i) for words present in text strings of a first subset of the fields, the encoder can generate word vector representations within a semantically encoded vector space…” In addition, col. 19, lines 34-35 teaches “Functions other than the logistic function, such as the sigmoid of tanh functions, may be used instead” Thus, the ANN utilizes a non-linear activation function.),
the second feature extraction utilizes a second embedding technique to generate the alert message description embedding (col. 2, lines 58-66 teaches “…and (ii) for text strings of a second subset of the fields, the encoder can generate one or more paragraph vector representations within the semantically encoded vector space”), and that the second embedding technique is different from the non-linear embedding technique (col. 2, lines 58-66 teaches “The method additionally includes obtaining an ANN… such that, for each of the incident reports: (i) for words present in text strings of a first subset of the fields, the encoder can generate word vector representations within a semantically encoded vector space, and (ii) for text strings of a second subset of the fields, the encoder can generate one or more paragraph vector representations within the semantically encoded vector space” The non-linear word embeddings are separate and different from the paragraph (sentence) embeddings.).
The combination of Singh, Zhang and Rodgers differs from the claimed invention, as defined in claims 1, 14 and 20, in that the combination fails to explicitly disclose utilizing separate and different embeddings for the different parts of the text within an incident report. Utilizing separate and different embedding functions is known in the art as evidenced by Jayaraman. Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the invention, to modify the system taught by the combination of Singh, Zhang and Rodgers such that the embedding layer 302 utilizes different non-linear embeddings, i.e., word embeddings, and paragraph (sentence) embeddings on the different components, i.e., summary and description, to reduce computational cost and improve the quality of the resulting word and paragraph vector representations (Jayaraman, col. 2, lines 23-36).
Regarding claims 2 and 15, the combination of Singh, Zhang, Rodgers and Jayaraman teaches all of the elements of claims 1 and 14 (see detailed element mapping above). In addition, Rodgers further teaches the apparatus is further configured to train an alert message machine learning model based on the UGC transformed alert data (Col. 3, lines 44-46 teaches “the machine learning system 104 is trained on the historical token data 106 provided by 102, in combination with a variety of additional data that the machine learning system has access to.”).
The combination of Singh and Zhang differs from the claimed invention, as defined by claims 2 and 15, in that the combination fails to disclose transforming the UGC alert data. Anonymizing private or sensitive data utilized by machine learning systems is known in the art as evidenced by Rodgers. Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the invention, to have modified the service ticket processing system taught by the combination of Singh and Zhang to include transforming/anonymizing the UGC data as taught by Rodgers as it merely constitutes the combination of known processes to achieve the predictable result of allowing use of private/sensitive data by machine learning models while maintaining the user’s privacy.
Regarding claim 3, the combination of Singh, Zhang, Rodgers and Jayaraman teaches all of the elements of claim 1 (see detailed element mapping above). In addition, Singh further teaches the monitoring service alert is programmatically parsed based at least in part on a presence of an alert message delimiter (under a broadest reasonable interpretation, an “alert message delimiter” corresponds to a word, e.g., a sequence or set of characters. This interpretation is consistent with ¶[0067] of the Specification, which states “The term ‘alert message delimiters’ refers to any character, sequence of characters, or set of characters that may be contained in a text string”; ¶[0046] teaches “Then the texts are converted into lists of words and each word in the data is replaced by its position in the vocab”).
Regarding claim 5, the combination of Singh, Zhang, Rodgers and Jayaraman teaches all of the elements of claim 1 (see detailed element mapping above). In addition, Zhang further teaches the semantic parser comprises at least one of a slot grammar parser (col. 8, lines 33-53 teaches with respect to the semantic parser “…The input text contents are first broken into sentences 212. Then each sentence is divided into a subject terms and a predicate term…” breaking the sentences into corresponding slots based on grammar is interpreted as a slot grammar parser) and a bidirectional long-short term memory (Bi-LSTM) based conditional random field (the “at least one of” language makes this element optional).
Singh differs from the claimed invention, as defined by claim 5, in that Singh fails to disclose utilizing a semantic parser. Semantic parsers for extracting information from text sources are known in the art as evidenced by Zhang. Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the invention, to have modified the service ticket processing system of Singh to include utilizing a semantic parser, such as a slot grammar parser, as taught by Zhang, as it merely constitutes the combination of known processes to achieve the predictable result of “identifying certain pre-defined types of information in the unstructured data source and extracting specific text content containing or representing such information” (Zhang, col. 2, lines 25-28).
Regarding claims 6 and 17, the combination of Singh, Zhang, Rodgers and Jayaraman teaches all of the elements of claims 1 and 14 (see detailed element mapping above). In addition, Rodgers further teaches segregating the monitoring service alert further comprises:
identifying one or more UGC data components of the text string of the monitoring service alert corresponding to the UGC text (Col. 3, lines 33-54 teaches “the organization creates a token 106 that represents the marriage event label for training or inference to a machine learning system 104…The token provides a mechanism whereby the organization 102 can communicate event or attribute labels to the machine learning system 104 without revealing to the learning machine system 104 the meaning of the event or attribute labels…this data may include, but is not limited to information about communications or relationships between different people in a group” token creation inherently requires identification of the components associated with the tokens); and
replacing each of the one or more UGC data components with one or more generic data tokens based at least in part on a UGC type of each of the one or more UGC data component (Col. 3, lines 33-43 teaches “the organization creates a token 106 that represents the marriage event label for training or inference to a machine learning system 104…The token provides a mechanism whereby the organization 102 can communicate event or attribute labels to the machine learning system 104 without revealing to the learning machine system 104 the meaning of the event or attribute labels”).
The combination of Singh and Zhang differs from the claimed invention, as defined by claims 6 and 17, in that the combination fails to disclose transforming the UGC alert data. Anonymizing private or sensitive data utilized by machine learning systems is known in the art as evidenced by Rodgers. Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the invention, to have modified the service ticket processing system taught by the combination of Singh and Zhang to include transforming/anonymizing the UGC data as taught by Rodgers, as it merely constitutes the combination of known processes to achieve the predictable result of allowing use of private/sensitive data by machine learning models while maintaining the user’s privacy.
Regarding claim 7, the combination of Singh, Zhang, Rodgers and Jayaraman teaches all of the elements of claim 1 (see detailed element mapping above). In addition, Singh further teaches generating the alert message problem embedding further comprises performing one or more data mutation processes on the alert message problem component (¶[0046] teaches “Embodiments first clean the data by removing the non-alphanumeric characters…”).
Regarding claim 9, the combination of Singh, Zhang, Rodgers and Jayaraman teaches all of the elements of claim 1 (see detailed element mapping above). In addition, Jayaraman teaches the non-linear embedding technique to generate the alert message problem embedding comprises a word embedding technique (col. 2, lines 58-66 teaches “The method additionally includes obtaining an ANN… such that, for each of the incident reports: (i) for words present in text strings of a first subset of the fields, the encoder can generate word vector representations within a semantically encoded vector space…” In addition, col. 19, lines 34-35 teaches “Functions other than the logistic function, such as the sigmoid of tanh functions, may be used instead” Thus, the ANN utilizes a non-linear activation function.).
The combination of Singh, Zhang, and Rodgers differs from the claimed invention, as defined in claim 9, in that the combination fails to explicitly disclose utilizing separate and different embeddings for the different parts of the text within the incident report. Utilizing separate and different embedding functions is known in the art as evidenced by Jayaraman. Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the invention, to modify the system taught by the combination of Singh, Zhang and Rodgers such that the embedding layer 302 utilizes different non-linear embeddings, i.e., word embeddings, and paragraph (sentence) embeddings on the different components, i.e., summary and description, to reduce computational cost and improve the quality of the resulting word and paragraph vector representations (Jayaraman, col. 2, lines 23-36).
Regarding claim 12, the combination of Singh, Zhang, Rodgers and Jayaraman teaches all of the elements of claim 1 (see detailed element mapping above). In addition, Jayaraman further teaches the second embedding technique to generate the alert message description embedding comprises utilizing a non-linear sentence embedding technique on the alert auxiliary details component (col. 2, lines 58-66 teaches “…and (ii) for text strings of a second subset of the fields, the encoder can generate one or more paragraph vector representations within the semantically encoded vector space”).
The combination of Singh, Zhang, and Rodgers differs from the claimed invention, as defined in claim 12, in that the combination fails to explicitly disclose utilizing separate and different embeddings for the different parts of the text within the incident report. Utilizing separate and different embedding functions is known in the art as evidenced by Jayaraman. Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the invention, to modify the system taught by the combination of Singh, Zhang and Rodgers such that the embedding layer 302 utilizes different non-linear embeddings, i.e., word embeddings, and paragraph (sentence) embeddings on the different components, i.e., summary and description, to reduce computational cost and improve the quality of the resulting word and paragraph vector representations (Jayaraman, col. 2, lines 23-36).
Claims 8, 10, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Singh, Zhang, Rodgers and Jayaraman as applied to claims 7, 9 and 14 above, and further in view of Sumner et al. (US 2023/0083838 A1; herein “Sumner”).
Regarding claim 8, the combination of Singh, Zhang, Rodgers and Jayaraman teaches all of the elements of claim 7 (see detailed element mapping above). In addition, Singh teaches performing at least one data mutation process. However, the combination of Singh, Zhang, Rodgers and Jayaraman fails to explicitly disclose that the one or more data mutation processes comprise at least one of stopword removal and lemmatization.
Sumner teaches a natural language processing module and method that preprocesses text data using one or more data mutation processes including at least one of stopword removal and lemmatization (¶[0025] teaches “The NLP module 112 may also contain instructions for preprocessing text data for analysis, such as removing stop words, stemming, lemmatization, and the like.”).
The combination of Singh, Zhang, Rodgers and Jayaraman differs from the claimed invention, as defined by claim 8, in that the combination fails to explicitly disclose that the data mutation process utilized to preprocess the text data includes at least one of stop word removal and lemmatization. Preprocessing text data by performing stop word removal and/or lemmatization is well known in the art as evidenced by Sumner. Therefore, it would have been obvious to modify the text preprocessing performed by the combination of Singh, Zhang, Rodgers and Jayaraman to include stop word removal and/or lemmatization as taught by Sumner, as it merely constitutes the combination of known processes to achieve the predictable result of preprocessing text data.
Regarding claims 10 and 18, the combination of Singh, Zhang, Rodgers and Jayaraman teaches all of the elements of claims 9 and 14 (see detailed element mapping above). However, the combination fails to explicitly disclose that word embedding technique comprises extracting a bigram list and a trigram list from the alert message problem component, and generating bigram word embeddings and trigram word embeddings from the bigram list and the trigram list, and wherein the alert message problem embedding comprises the bigram word embeddings and the trigram word embeddings.
Sumner teaches the word embedding technique comprises
extracting a bigram list and a trigram list from the alert message problem component (¶[0048] teaches “Keywords 412 (including words, bigrams, trigrams, and other n-grams) may be identified by supervised methods…”), and
generating bigram word embeddings and trigram word embeddings from the bigram list and the trigram list (¶[0049] teaches “The novice model 114 encodes the text data 202 into a numerical format by word embedding…the text data 202 encoded is only the identified keywords 412 to streamline the method 400.”), and
wherein the alert message problem embedding comprises the bigram word embeddings and the trigram word embeddings (¶[0049] teaches “The novice model 114 encodes the text data 202 into a numerical format by word embedding…the text data 202 encoded is only the identified keywords 412 to streamline the method 400.” The keywords are interpreted as corresponding to the alert message problem component).
The combination of Singh, Zhang, Rodgers and Jayaraman differs from the claimed invention, as defined by claims 10 and 18, in that the combination fails to explicitly disclose that the word embeddings include bigram and trigram embeddings. Generating word embeddings that include bigram and trigram embeddings is well known in the art as evidenced by Sumner. Therefore, it would have been obvious to modify the text embedding performed by the combination of Singh, Zhang, Rodgers and Jayaraman to include identifying bigram and trigram lists corresponding to keywords as taught by Sumner, as it merely constitutes the combination of known processes to achieve the predictable result of generating word embeddings.
Regarding claim 19, the combination of Singh, Zhang, Rodgers, Jayaraman and Sumner teaches all of the elements of claim 18 (see detailed element mapping above). In addition, Jayaraman further teaches the second embedding technique to generate the alert message description embedding comprises utilizing a non-linear sentence embedding technique on the alert auxiliary details component (col. 2, lines 58-66 teaches “…and (ii) for text strings of a second subset of the fields, the encoder can generate one or more paragraph vector representations within the semantically encoded vector space”).
The combination of Singh, Zhang, Rodgers and Sumner differs from the claimed invention, as defined in claim 19, in that the combination fails to explicitly disclose utilizing separate and different embeddings for the different parts of the text within the incident report. Utilizing separate and different embedding functions is known in the art as evidenced by Jayaraman. Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the invention, to modify the system taught by the combination of Singh, Zhang, Rodgers and Sumner such that the embedding layer 302 utilizes different non-linear embeddings, i.e., word embeddings, and paragraph (sentence) embeddings on the different components, i.e., summary and description, to reduce computational cost and improve the quality of the resulting word and paragraph vector representations (Jayaraman, col. 2, lines 23-36).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Singh, Zhang, Rodgers, Jayaraman and Sumner as applied to claim 10 above, and further in view of Sengupta et al. (US 2023/0061731 A1; herein “Sengupta”).
Regarding claim 11, the combination of Singh, Zhang, Rodgers, Jayaraman and Sumner teaches all of the elements of claim 10 (see detailed element mapping above). In addition, Sumner further teaches the apparatus is further configured to generate an inverse document frequency score for each bigram in the bigram list and each trigram in the trigram list (¶[0026] teaches “The novice model 114 may use any available encoding techniques to extract the word embeddings from the text data…Such techniques include GloVe, TF-IDF, word2vec, and any other known word embedding algorithm”; ¶[0048] teaches “Keywords 412 (including words, bigrams, trigrams, and other n-grams) may be identified by supervised methods…”; and ¶[0049] teaches “The novice model 114 encodes the text data 202 into a numerical format by word embedding…the text data 202 encoded is only the identified keywords 412 to streamline the method 400.”).
However, the combination of Singh, Zhang, Rodgers, Jayaraman and Sumner fails to explicitly disclose storing the inverse document frequency score with the bigram word embeddings and the trigram word embeddings, respectively, as recited in amended claim 11.
Sengupta teaches methods and systems for recognizing significant words in unstructured text that include, inter alia, storing the inverse document frequency score with the bigram word embeddings and the trigram word embeddings, respectively (¶[0101] teaches “…the label-based feature data object for an unstructured textual data object is generated based at least in part on character co-occurrences within the word-level tokens 602 assigned with a significance token label. Specifically, n-gram Term Frequency-Inverse Document Frequency (TF-IDF) may be used to generate a label-based feature data object…2-gram TF-IDF may result in a label-based feature with weights for character features for “dr”, “jo”, “oh”, “hn”, and so one…the label-based feature data object comprises features extracted from the word-level tokens 602 labelled as significant based at least in part on 1-grm, 2-gram, and 3-gram TF-IDF” and ¶[0102] teaches “…In some embodiments, the embeddings generated from the Word2Vec techniques may be aggregated or combined with the label-based feature data object generated based at least in part on character co-occurrences”).
The combination of Singh, Zhang, Rodgers, Jayaraman and Sumner differs from the claimed invention, as defined by claim 11, in that the combination fails to explicitly disclose storing the inverse document frequency score with the bigram and trigram embeddings, respectively. Storing n-gram TF-IDF scores with corresponding word embeddings is known in the art as evidenced by Sengupta. Therefore, it would have been obvious to modify the text embedding performed by the combination of Singh, Zhang, Rodgers, Jayaraman, and Sumner to include storing/combining the inverse document frequency scores with the respective word embeddings as taught by Sengupta, as it merely constitutes the combination of known processes to achieve the predictable result of using IDF scores to weight the word embeddings.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PENNY L CAUDLE whose telephone number is (703)756-1432. The examiner can normally be reached M-Th 8:00 am to 5:00 pm eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PENNY L CAUDLE/Examiner, Art Unit 2657
/DANIEL C WASHBURN/Supervisory Patent Examiner, Art Unit 2657