Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to the communication filed on June 19, 2023. Claims 1-16 are pending.
Information Disclosure Statement
The IDS filed on August 29, 2023 is not accepted because the submitted documents are not readable. Applicant is required to resubmit those documents in a readable version.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Regarding claim 1, it recites pre-training a deep learning model on an unsupervised training dataset of natural language text; pre-training the deep learning model on an unsupervised training dataset of source code snippets; fine-tuning the deep learning model on a supervised training dataset, wherein the supervised training dataset includes a plurality of tuples, wherein a tuple includes a method signature of a focal method and a plurality of test cases for the focal method; and deploying the deep learning model to generate a method body for a target method given at least one test case for the target method.
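For clarity of the record only, the recited sequence of steps may be illustrated by the following minimal sketch. All names and the stub model are hypothetical illustrations and are not drawn from the applicant's specification or from any applied reference.

# Illustrative sketch only; all names are hypothetical and the "model" is a stub.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ToyModel:
    seen: List[str] = field(default_factory=list)

    def train(self, samples: List[str]) -> None:
        # Stand-in for a gradient-based training pass over the samples.
        self.seen.extend(samples)

    def generate(self, prompt: str) -> str:
        # Stand-in for autoregressive generation of a method body.
        return f"// generated body for prompt: {prompt[:40]}..."

@dataclass
class FineTuneTuple:
    method_signature: str   # signature of the focal method
    test_cases: List[str]   # test cases exercising the focal method

model = ToyModel()

# Step 1: pre-training on an unsupervised natural language corpus.
model.train(["Sorting returns the elements in ascending order."])

# Step 2: pre-training on an unsupervised corpus of source code snippets.
model.train(["def add(a, b): return a + b"])

# Step 3: fine-tuning on supervised tuples of (method signature, test cases).
supervised = [FineTuneTuple("int max(int a, int b)",
                            ["assert max(2, 3) == 3", "assert max(5, 1) == 5"])]
for t in supervised:
    model.train([t.method_signature] + t.test_cases)

# Step 4: deployment -- generate a method body given a test case for a target method.
print(model.generate("assert reverse('ab') == 'ba'"))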
The limitations of pre-training (both instances) and fine-tuning, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind. A user can mentally perform these steps and, if necessary, can use a physical aid such as pen and paper. Hence, the limitations are a mental process. See MPEP 2106.04(a)(2), Section III.B: if a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea. See, e.g., Benson, 409 U.S. at 67, 65, 175 USPQ at 674-75, 674 (noting that the claimed "conversion of [binary-coded decimal] numerals to pure binary numerals can be done mentally," i.e., "as a person would do it by head and hand.").
The claim recites one additional element: deploying the deep learning model to generate a method body for a target method given at least one test case for the target method. The deploying step as recited can be performed with a generic computer component. Hence, the deploying step is insignificant extra-solution activity. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of the deploying step amounts to no more than mere instructions to apply the exception using a generic computer component. The courts have recognized these functions as well-understood, routine, and conventional when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d) II, Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Claim 2 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 2 recites the same abstract idea of a neural transformer model for software development. The claim recites the limitations of obtaining a first plurality of source code snippets; and generating the unsupervised training dataset of source code snippets by applying a denoising function to each training sample of the first plurality of source code snippets, which can be done mentally with or without the use of a physical aid (e.g., pen and paper), or with a generic computer in the form of insignificant extra-solution activity, which is not an inventive concept that meaningfully limits the abstract idea. Therefore, the limitation is a mental process.
Claim 3 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 3 recites the same abstract idea of a neural transformer model for software development. The claim recites the limitations of obtaining a first plurality of natural language text; and generating the unsupervised training dataset of natural language text by applying a denoising function to each sample of the first plurality of natural language text, which can be done mentally with or without the use of a physical aid (e.g., pen and paper), or with a generic computer in the form of insignificant extra-solution activity, which is not an inventive concept that meaningfully limits the abstract idea. Therefore, the limitation is a mental process.
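For illustration of the recited denoising function only, a minimal span-masking sketch follows. The particular masking scheme is assumed for illustration and is not taken from the applicant's specification or from any applied reference.

# Minimal span-masking denoiser; the masking scheme shown is assumed for
# illustration and is not taken from the specification or the references.
import random

def denoise(sample: str, mask_token: str = "<MASK>", span: int = 2, seed: int = 0) -> str:
    """Replace a short contiguous span of tokens with a single mask token."""
    rng = random.Random(seed)
    tokens = sample.split()
    if len(tokens) <= span:
        return mask_token
    start = rng.randrange(len(tokens) - span)
    return " ".join(tokens[:start] + [mask_token] + tokens[start + span:])

# Applying the denoising function to each training sample yields an
# unsupervised (corrupted input, original output) training dataset.
snippets = ["def add(a, b): return a + b", "for item in items: total += item"]
dataset = [(denoise(s), s) for s in snippets]
print(dataset)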
Claim 4 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 4 recites the same abstract idea of a neural transformer model for software development. The claim recites the limitation wherein the deep learning model is deployed in an integrated development environment (IDE), which can be done mentally with or without the use of a physical aid (e.g., pen and paper), or with a generic computer in the form of insignificant extra-solution activity, which is not an inventive concept that meaningfully limits the abstract idea. Therefore, the limitation is a mental process.
Claim 5 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 5 recites the same abstract idea of a neural transformer model for software development. The claim recites the limitation wherein the deep learning model is deployed in a source code editor, which can be done mentally with or without the use of a physical aid (e.g., pen and paper), or with a generic computer in the form of insignificant extra-solution activity, which is not an inventive concept that meaningfully limits the abstract idea. Therefore, the limitation is a mental process.
Claim 6 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 6 recites the same abstract idea of a neural transformer model for software development. The claim recites the limitation wherein the deep learning model is deployed as a web service that generates the method body for the target method when given the at least one test case for the target method, which can be done mentally with or without the use of a physical aid (e.g., pen and paper), or with a generic computer in the form of insignificant extra-solution activity, which is not an inventive concept that meaningfully limits the abstract idea. Therefore, the limitation is a mental process.
Claim 7 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 7 recites the same abstract idea of a neural transformer model for software development. The claim recites the limitation wherein the deep learning model is a neural transformer with attention, which can be done mentally with or without the use of a physical aid (e.g., pen and paper), or with a generic computer in the form of insignificant extra-solution activity, which is not an inventive concept that meaningfully limits the abstract idea. Therefore, the limitation is a mental process.
Claim 8 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 8 recites the same abstract idea of a neural transformer model for software development. The claim recites the limitation wherein the deep learning model is a neural transformer with attention in an encoder-decoder configuration, which can be done mentally with or without the use of a physical aid (e.g., pen and paper), or with a generic computer in the form of insignificant extra-solution activity, which is not an inventive concept that meaningfully limits the abstract idea. Therefore, the limitation is a mental process.
As to claim 9, it has similar limitations to those of claim 1 above. Hence, claim 9 is rejected under the same rationale as claim 1 above.
Claim 10 is dependent on claim 9 and includes all the limitations of claim 9. Therefore, claim 10 recites the same abstract idea of a neural transformer model for software development. The claim recites the limitation wherein the program includes instructions to perform actions that: create the unsupervised training dataset of natural language text through application of a denoising function to each natural language training sample of the unsupervised training dataset of natural language text, which can be done mentally with or without the use of a physical aid (e.g., pen and paper), or with a generic computer in the form of insignificant extra-solution activity, which is not an inventive concept that meaningfully limits the abstract idea. Therefore, the limitation is a mental process.
Claim 11 is dependent on claim 9 and includes all the limitations of claim 9. Therefore, claim 11 recites the same abstract idea of a neural transformer model for software development. The claim recites the limitation wherein the program includes instructions to perform actions that: create the unsupervised training dataset of source code snippets through application of a denoising function to each source code snippet of the unsupervised training dataset of source code snippets, which can be done mentally with or without the use of a physical aid (e.g., pen and paper), or with a generic computer in the form of insignificant extra-solution activity, which is not an inventive concept that meaningfully limits the abstract idea. Therefore, the limitation is a mental process.
As to claims 12-16, they have similar limitations to those of claims 4-8 above. Hence, they are rejected under the same rationale as claims 4-8 above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-16 are rejected under 35 U.S.C. 103 as being unpatentable over Singh et al. (Patent No.: US 11693637 B1) in view of Kurata et al. (Pub. No.: US 20170061330 A1).
As to claim 1, Singh teaches a computer-implemented method, comprising:
pre-training a deep learning model on an unsupervised training dataset of natural language text (Column 9 lines 11-12: training instance input that includes a natural language description of a source code snippet in PL1);
pre-training the deep learning model on an unsupervised training dataset of source code snippets (column 9 lines 3-4: training instance input that includes a source code snippet in PL1);
tuning the deep learning model on a supervised training dataset, wherein the supervised training dataset includes a plurality of tuples, wherein a tuple includes a method signature of a focal method and a plurality of test cases for the focal method (column 12 lines 57-67: The augmentation engine 122 can be used to generate additional training instances that each include a modification of the source code snippet obtained from a repository).
Singh does not explicitly disclose, but Kurata teaches, the tuning being a fine-tuning (paragraph [0045]: a supervised fine-tuning is performed in deep learning architectures) and deploying the deep learning model to generate a method body for a target method given at least one test case for the target method (paragraph [0103]: the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Singh by adding the above limitations as taught by Kurata in order to improve the training of the deep learning model.
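For illustration only, the following sketch shows one hypothetical way a focal method signature and its test cases might be assembled into an input for body generation at deployment time; the prompt format is assumed and is not drawn from Singh, Kurata, or the applicant's specification.

# Hypothetical prompt format combining a focal method signature with its
# test cases; the exact format is an assumption for illustration only.
from typing import List

def build_prompt(method_signature: str, test_cases: List[str]) -> str:
    lines = ["// signature:", method_signature, "// tests:"]
    lines.extend(test_cases)
    lines.append("// body:")
    return "\n".join(lines)

prompt = build_prompt("int max(int a, int b)",
                      ["assert max(2, 3) == 3", "assert max(5, 1) == 5"])
print(prompt)  # this assembled input would be supplied to the deployed model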
As to claim 2, Singh together with Kurata teaches a computer-implemented method according to claim 1. Singh teaches obtaining a first plurality of source code snippets and generating the unsupervised training dataset of source code snippets by applying a denoising function to each training sample of the first plurality of source code snippets (column 9 lines 1-26).
As to claim 3, Singh together with Kurata teaches a computer-implemented method according to claim 1. Singh teaches obtaining a first plurality of natural language text; and generating the unsupervised training dataset of natural language text by applying a denoising function to each sample of the first plurality of natural language text (column 9 lines 1-26).
As to claim 4, Singh together with Kurata teaches a computer-implemented method according to claim 1. Singh teaches wherein the deep learning model is deployed in an integrated development environment (IDE) (column 8 lines 1-51).
As to claim 5, Singh together with Kurata teaches a computer-implemented method according to claim 1. Singh teaches wherein the deep learning model is deployed in a source code editor (column 5 lines 27-29).
As to claim 6, Singh together with Kurata teaches a computer-implemented method according to claim 1. Singh teaches wherein the deep learning model is deployed as a web service that generates the method body for the target method when given the at least one test case for the target method (column 8 lines 49-52).
As to claim 7, Singh together with Kurata teaches a computer-implemented method according to claim 1. Singh teaches wherein the deep learning model is a neural transformer with attention (column 9 lines 27-30).
As to claim 8, Singh together with Kurata teaches a computer-implemented method according to claim 1. Singh teaches wherein the deep learning model is a neural transformer with attention in an encoder-decoder configuration (column 9 lines 27-50).
As to claim 9, it has similar limitations to those of claim 1 above. Hence, claim 9 is rejected under the same rationale as claim 1 above.
As to claim 10, Singh together with Kurata teaches a system according to claim 9. Singh teaches wherein the program includes instructions to perform actions that: create the unsupervised training dataset of natural language text through application of a denoising function to each natural language training sample of the unsupervised training dataset of natural language text (column 9 lines 27-50).
As to claim 11, Singh together with Kurata teaches a system according to claim 9. Singh teaches wherein the program includes instructions to perform actions that: create the unsupervised training dataset of source code snippets through application of a denoising function to each source code snippet of the unsupervised training dataset of source code snippets (column 9 line 61 to column 10 line 20).
As to claim 12, Singh together with Kurata teaches a system according to claim 9. Singh teaches wherein the deep learning model is deployed in an integrated development environment (IDE) (column 8 lines 35-52).
As to claims 13-16, they have similar limitations to those of claims 5-8 above. Hence, they are rejected under the same rationale as claims 5-8 above.
Examiner's Note: The Examiner has cited particular columns and line numbers or paragraphs in the references as applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, the applicant is respectfully requested to fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the cited passages.
Conclusion
The prior art made of record, listed on form PTO-892, and not relied upon, if any, is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MD I UDDIN whose telephone number is (571)270-3559. The examiner can normally be reached M-F, 8:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MD I UDDIN/Primary Examiner, Art Unit 2169