Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0136527 A1 to Zhang et al. ("Zhang") in view of US 2022/0180190 A1 to Verma et al. ("Verma").
Regarding claims 1, 8, and 15, Zhang teaches a method, the method comprising:
performing, using a prompt encoder, prompt learning on an input data set to generate a revised text pattern; (Zhang fig. 7 and para 95: “Tokens are dynamically masked during batch training, and the sentence ui and ūi are input together to a single encoder during the batch training.” The tokens are the input data set. The encoding at the output of the encoder is the revised text pattern; Zhang para 37 calls this encoding an “encoded text phrase,” and it also appears in Zhang fig. 2, “Encode text phrase” 210.)
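For purposes of illustration only, and without attributing any particular implementation to Zhang, the following sketch shows one way the cited dynamic-masking and single-encoder operation could be realized. All identifiers, dimensions, the masking rate, and the toy data are the examiner's hypothetical choices, not Zhang's disclosed code.

```python
# Illustrative sketch (not Zhang's code): dynamically mask ~10% of tokens and
# encode the sentence u_i and its masked variant together with a single
# shared encoder, per Zhang para 95 / fig. 7.
import torch
import torch.nn as nn

MASK_ID = 0  # hypothetical mask-token id

class PromptEncoder(nn.Module):
    """Stand-in for Zhang's single encoder (architecture assumed)."""
    def __init__(self, vocab_size=30522, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids):
        # Mean-pool token embeddings into one sentence vector
        # (the "revised text pattern" under the rejection's mapping).
        return self.proj(self.embed(token_ids).mean(dim=1))

def dynamic_mask(token_ids, rate=0.10):
    """Randomly replace ~10% of tokens with MASK_ID, re-drawn each batch."""
    mask = torch.rand_like(token_ids, dtype=torch.float) < rate
    return token_ids.masked_fill(mask, MASK_ID)

encoder = PromptEncoder()
u_i = torch.randint(1, 30522, (4, 16))        # a batch of token ids (toy data)
u_bar_i = dynamic_mask(u_i)                   # masked variant, new mask each batch
h_i, h_bar_i = encoder(u_i), encoder(u_bar_i) # both pass through one encoder
```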
processing, using a text generative adversarial network, the revised text pattern with an existing data set to generate a fused data set; (Zhang fig. 2, “identify intent” 215. Zhang para 37: “the system identifies intent of the encoded text phrase.” During training, this identified intent is compared to existing labeled data to calculate an “intent classification loss.” Zhang para 101. This intent classification loss is the fused data set.)
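By way of illustration, the intent classification loss cited above can be computed as a standard cross-entropy between predicted intent logits and the known intents of the existing data set. The head, class count, and data below are assumptions for exposition, not Zhang's code.

```python
# Illustrative sketch: intent classification loss (Zhang para 101) as
# cross-entropy between predicted intents and known intent labels.
import torch
import torch.nn as nn

num_intents = 5                           # assumed number of intent classes
classifier = nn.Linear(128, num_intents)  # intent head over encoder output

h = torch.randn(4, 128)                   # encoded text phrases (from the encoder)
labels = torch.tensor([0, 2, 1, 4])       # known intents from the existing data set

logits = classifier(h)                    # identify intent (Zhang fig. 2, 215)
intent_loss = nn.functional.cross_entropy(logits, labels)
```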
updating a machine learning system with the fused data set. (Zhang para 102: “At operation 730, the system fine-tunes the network based on the second contrastive learning loss and the intent classification loss.” Fine-tuning is updating a machine learning system.)
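As a non-limiting illustration of operation 730, a single fine-tuning step on a combined contrastive and intent classification loss could look like the following. The stand-in model, the loss forms, and their equal weighting are assumptions, not Zhang's disclosure.

```python
# Illustrative sketch (not Zhang's code): one fine-tuning step that updates the
# network based on a contrastive loss plus the intent classification loss
# (Zhang para 102, operation 730).
import torch
import torch.nn as nn

model = nn.Linear(128, 5)                        # stand-in network
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

h = torch.randn(8, 128)                          # encoded text phrases
labels = torch.randint(0, 5, (8,))               # known intent labels
h_pos = h + 0.01 * torch.randn_like(h)           # encodings of masked variants

intent_loss = nn.functional.cross_entropy(model(h), labels)
contrastive_loss = 1.0 - nn.functional.cosine_similarity(h, h_pos).mean()

opt.zero_grad()
(intent_loss + contrastive_loss).backward()      # fine-tune = update the ML system
opt.step()
```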
Zhang does not teach a generative adversarial network (GAN).
However, Verma teaches using a text generative adversarial network. (Verma para 26: “The adapted GAN can be used to classify the intent of some user input…”)
Zhang, Verma, and the claims are all directed to intent classification. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to use a GAN to recognize intent in Zhang's system because the classifier/discriminator and the generator “can be trained as adversaries such that improvements in Discriminator output feeds back to improve the ability of the Generator to simulate the true data.” Verma para 32.
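For illustration of the adversarial feedback that Verma para 32 describes, the sketch below alternates a discriminator update with a generator update so that improvements in the Discriminator feed back to improve the Generator. The architectures, sizes, learning rates, and data are the examiner's hypothetical choices, not Verma's code.

```python
# Illustrative sketch (not Verma's code): adversarial training loop in the
# style of Verma para 32.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 128))  # Generator
D = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))   # Discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(8, 128)   # true encoded utterances (toy data)

# Discriminator step: learn to separate real encodings from generated ones.
fake = G(torch.randn(8, 16)).detach()
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: Discriminator output feeds back so G better simulates true data.
g_loss = bce(D(G(torch.randn(8, 16))), torch.ones(8, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```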
Regarding claims 2, 9, and 16, Zhang teaches the method of claim 1, wherein the performing and processing operations further comprise preserving an original meaning of the input data set while utilizing a style of the existing data set. (Zhang para 16: “a contrastive pre-learning component can train a neural network to discriminate semantically similar utterances in a training dataset without using any labeled examples.” The semantically similar utterances are the style of the existing data set. The original meaning is the intent, which is preserved from encoding through classification; see Zhang figs. 2 and 7.)
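By way of illustration, a label-free contrastive objective of the kind Zhang para 16 describes can be written in an InfoNCE style, where each utterance must be discriminated from all others while matching its own semantically similar variant. The temperature and the toy encodings are assumptions.

```python
# Illustrative sketch (not Zhang's code): contrastive discrimination of
# semantically similar utterances without labeled examples.
import torch
import torch.nn.functional as F

z1 = F.normalize(torch.randn(8, 128), dim=1)   # encodings of utterances u_i
z2 = F.normalize(torch.randn(8, 128), dim=1)   # encodings of masked variants

temp = 0.07                                    # assumed temperature
sim = z1 @ z2.t() / temp                       # pairwise similarities
targets = torch.arange(8)                      # each u_i matches its own variant
contrastive_loss = F.cross_entropy(sim, targets)  # InfoNCE-style loss
```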
Regarding claims 3, 10, and 17, Zhang teaches the method of claim 1, wherein the performance of the prompt learning matches the input data set with the existing data set using a user-defined or auto-generated template. (Zhang para 96: “two input vectors hi and h̄i, [where] h̄i represents the representation of sentence ūi, where ūi is from the same sentence ui but few (10%) tokens are randomly masked. Tokens are dynamically masked during batch training, and the sentence ui and ūi are input together to a single encoder during the batch training.” The randomly generated mask is the auto-generated template.)
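For illustration, the randomly generated 10% token mask that this rejection maps to the claimed “auto-generated template” can be expressed as a simple sampling function. The mask symbol and example sentence are hypothetical.

```python
# Illustrative sketch: random ~10% token masking as the "auto-generated
# template" under the rejection's mapping (Zhang para 96).
import random

MASK = "[MASK]"

def auto_template(tokens, rate=0.10):
    """Return a copy of the sentence with ~10% of tokens randomly masked."""
    return [MASK if random.random() < rate else t for t in tokens]

u_i = "book a table for two at noon".split()
u_bar_i = auto_template(u_i)   # e.g. ['book', 'a', '[MASK]', 'for', 'two', ...]
```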
Regarding claims 4, 11, and 18, Zhang teaches the method of claim 1, wherein the updating operation further comprises obtaining corresponding meanings of new words of the input data set based on the fused data set. (The examiner interprets the new words to be the input words. Zhang para 102: “At operation 730, the system fine-tunes the network based on the second contrastive learning loss and the intent classification loss.” The newly classified intent is compared to the known classified intent, and the resulting loss is the intent classification loss. The newly classified intent is the claimed obtained meaning of the new words, and that meaning is based on the fused data set because the classifier is trained on the classification loss, which is the fused data set.)
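For illustration of this mapping, once the network is fine-tuned, the “meaning” of an input containing new words is obtained by running its encoding through the trained intent head. The encoder output, head, and intent labels below are hypothetical stand-ins, not Zhang's disclosure.

```python
# Illustrative sketch: obtaining the intent (the "meaning of new words" under
# the examiner's interpretation) from the fine-tuned classifier.
import torch
import torch.nn as nn

intent_names = ["greet", "book", "cancel", "query", "other"]  # hypothetical
head = nn.Linear(128, len(intent_names))   # fine-tuned intent head (stand-in)

h_new = torch.randn(1, 128)                # encoding of input with new words
intent = intent_names[head(h_new).argmax(dim=1).item()]  # obtained "meaning"
```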
Regarding claims 5, 12, and 19, Zhang teaches the method of claim 1, wherein the processing of the revised text pattern further comprises fine-tuning the revised text pattern to generate the fused data set. (Zhang para 101: “At operation 725, the system computes an intent classification loss. In some cases, the operations of this step refer to, or may be performed by, a fine-tuning…” Zhang para 102: “At operation 730, the system fine-tunes the network based on the second contrastive learning loss and the intent classification loss.”)
Regarding claims 6, 13, and 20, Zhang teaches the method of claim 1, further comprising constructing the prompt encoder and configuring the prompt encoder to construct the revised text pattern according to new words in the input data set (Zhang para 97: “At operation 710, the system computes a mask language modeling loss. In some cases, the operations of this step refer to, or may be performed by, a pre-training component…” Zhang para 98: “At operation 715, the system trains the network based on the first contrastive learning loss and the mask language modeling loss.” The network comprises the encoder and the intent classifier.), the revised text pattern enabling a pre-trained language model to ascertain a specific meaning of the new words. (Zhang para 102: “the system fine-tunes the network based on the second contrastive learning loss and the intent classification loss.” The intent classification loss comes from the pre-trained intent classifier being fed new masked encoded text; see Zhang para 101: “the system computes an intent classification loss. In some cases, the operations of this step refer to, or may be performed by, a fine-tuning component…”)
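By way of illustration, the pre-training objective of Zhang paras 97-98 combines a mask language modeling (MLM) loss, which predicts the identity of masked tokens, with a first contrastive learning loss. The vocabulary size, stand-in heads, and toy data below are assumptions for exposition.

```python
# Illustrative sketch (not Zhang's code): pre-training on a mask language
# modeling loss plus a first contrastive loss (Zhang paras 97-98, ops 710/715).
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab = 1000
lm_head = nn.Linear(128, vocab)             # predicts identity of masked tokens

h_masked = torch.randn(8, 128)              # encodings at masked positions
true_ids = torch.randint(0, vocab, (8,))    # the tokens that were masked out
mlm_loss = F.cross_entropy(lm_head(h_masked), true_ids)

z1 = F.normalize(torch.randn(8, 128), dim=1)
z2 = F.normalize(torch.randn(8, 128), dim=1)
contrastive_loss = F.cross_entropy(z1 @ z2.t() / 0.07, torch.arange(8))

pretrain_loss = mlm_loss + contrastive_loss  # operation 715: train on both losses
```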
Regarding claims 7 and 14, Zhang teaches the method of claim 1, wherein the performing, processing, and updating steps are carried out without feature engineering by a human expert. (Zhang's disclosed pipeline is learned end to end and nowhere describes feature engineering by a human expert; the absence of any such step in Zhang's method teaches this negative limitation.)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Austin Hicks whose telephone number is (571)270-3377. The examiner can normally be reached Monday - Thursday 8-4 PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AUSTIN HICKS/Primary Examiner, Art Unit 2124