DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The instant application is a continuation of App. No. 17/507,808, filed on October 22, 2021, which issued as U.S. Patent No. 11,656,851 on May 23, 2023.
Election/Restrictions
Applicant’s election without traverse of claims 1-8 in the reply filed on July 24, 2025, is acknowledged.
Claims 1-20 are pending, of which claims 1-8 are original and claims 9-20 are withdrawn. Claims 1-8 have been fully considered by the Examiner.
Examiner Notes
Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Claim Objections
Claims 6-8 are objected to because of the following informalities:
With respect to claim 6, lines 1-2 recite, with emphasis added, “wherein the pre-configured order priorities global import statements over other syntax elements.” This appears to be a typographical error that should recite “wherein the pre-configured order prioritizes global import statements over other syntax elements.”
With respect to claim 7, lines 1-2 recite, with emphasis added, “wherein the pre-configured order priorities assigned values over other syntax elements than global import statements.” This appears to be a typographical error that should recite “wherein the pre-configured order prioritizes assigned values over other syntax elements than global import statements.”
With respect to claim 8, lines 1-2 recite, with emphasis added, “wherein the pre-configured order priorities class attributes over other syntax elements than global import statements and assigned values.” This appears to be a typographical error that should recite “wherein the pre-configured order prioritizes class attributes over other syntax elements than global import statements and assigned values.”
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
With respect to claim 1, the last line recites, with emphasis added, “obtain from a deep learning model source code based on the data of the rolling window.” While the claim previously recites “a rolling window,” it does not previously recite “data of the rolling window.” The claim also recites on lines 8-10, “fill a first one of the plurality of context windows of the rolling window with select ones of the first plurality of syntax elements and a second one of the plurality of context windows of the rolling window with select ones of the second plurality of syntax elements.” It is unclear if “the data of the rolling window” means “the select ones of the first plurality of syntax elements of the first one of the plurality of context windows of the rolling window,” “the select ones of the second plurality of syntax elements of the second one of the plurality of context windows of the rolling window,” both of these, or some other “data of the rolling window.” The scope of the claim is therefore indefinite. For purposes of compact prosecution only and consistent with Applicant’s specification1, Examiner has interpreted “the data of the rolling window” as meaning “the select ones of the first plurality of syntax elements of the first one of the plurality of context windows of the rolling window and the select ones of the second plurality of syntax elements of the second one of the plurality of context windows of the rolling window.”
With respect to claims 2-8, each inherits the 35 U.S.C. 112(b) deficiency of claim 1 identified above.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-8 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 4, 5, and 8 of U.S. Patent No. 11,656,851 in view of Clement et al. “Long-Range Modeling of Source Code Files with eWASH: Extended Window Access by Syntax Hierarchy” (hereinafter Clement)2. The claims of the instant application and the claims of the reference are compared in the following table:
Instant Application
Reference Patent No. 11,656,851
1. A system comprising: one or more processors; and a memory that stores one or more programs that are configured to be executed by the one or more processors, the one or more programs including instructions to perform actions that:
extract a first plurality of syntax elements from a source code program to represent a focal method and a second plurality of syntax elements to represent a context of the focal method;
generate a rolling window comprising a plurality of context windows;
fill a first one of the plurality of context windows of the rolling window with select ones of the first plurality of syntax elements and a second one of the plurality of context windows of the rolling window with select ones of the second plurality of syntax elements, wherein the select ones of the second plurality of syntax elements are selected based on a pre-configured order; and
obtain from a deep learning model source code based on the data of the rolling window.
1. A system comprising: one or more processors; and a memory that stores one or more programs that are configured to be executed by the one or more processors, the one or more programs including instructions to perform actions that:
extract a first plurality of syntax elements from a source code program to represent a focal method and a second plurality of syntax elements to represent a context of the focal method;
generate a fixed-size context window from the first plurality of syntax elements and the second plurality of syntax elements, wherein the fixed-size context window includes a first portion and a second portion,
wherein the first portion includes the first plurality of syntax elements representing the focal method, wherein the second portion includes select ones of the second plurality of syntax elements representing the context of the focal method, wherein the select ones of the second plurality of syntax elements are selected based on a priority order, wherein the second plurality of syntax elements representing the context of the focal method have another scope than a local scope of the focal method;
apply the fixed-size context window to train a deep learning model to learn to predict source code based on data of the fixed-size context window.
2. The system of claim 1, wherein the pre-configured order includes a hierarchical list of syntax elements.
2. The system of claim 1, wherein the priority order includes a pre-configured list of syntax elements in a hierarchical order.
3. The system of claim 1, wherein the second plurality of syntax elements have a scope that differs from a local scope of the focal method.
1. … wherein the second plurality of syntax elements representing the context of the focal method have another scope than a local scope of the focal method …
4. The system of claim 1, wherein the first plurality of syntax elements includes a method signature of the focal method, a method docstring of the focal method, and/or a class name of the focal method.
4. The system of claim 1, wherein the first plurality of syntax elements includes a method signature of the focal method, a method docstring of the focal method, and a class name of the focal method.
5. The system of claim 1, wherein the second plurality of syntax elements include a global import statement, a method signature of a peer method of a class of the focal method, a docstring of a class of the method signature, a global expression, and/or a method body of a method of the class of the focal method.
5. The system of claim 1, wherein the second plurality of syntax elements include a global import statement, a method signature of a peer method of a class of the focal method, a docstring of a class of the method signature, a global expression, and/or a method body of a method of the class of the focal method.
6. The system of claim 1, wherein the pre-configured order priorities global import statements over other syntax elements.
8. The system of claim 1, wherein the priority order for extracting the second plurality of syntax elements representing the context includes: (1) global import statements; (2) assigned values; (3) class attributes; (4) peer class method signatures; (5) class docstrings; (6) peer class method docstrings; (7) global expressions; and (8) source code bodies of peer class methods of the focal method.
7. The system of claim 1, wherein the pre-configured order priorities assigned values over other syntax elements than global import statements.
8. The system of claim 1, wherein the priority order for extracting the second plurality of syntax elements representing the context includes: (1) global import statements; (2) assigned values; (3) class attributes; (4) peer class method signatures; (5) class docstrings; (6) peer class method docstrings; (7) global expressions; and (8) source code bodies of peer class methods of the focal method.
8. The system of claim 1, wherein the pre-configured order priorities class attributes over other syntax elements than global import statements and assigned values.
8. The system of claim 1, wherein the priority order for extracting the second plurality of syntax elements representing the context includes: (1) global import statements; (2) assigned values; (3) class attributes; (4) peer class method signatures; (5) class docstrings; (6) peer class method docstrings; (7) global expressions; and (8) source code bodies of peer class methods of the focal method.
With respect to claim 1, as illustrated in the table above, all of the limitations are disclosed in claim 1 of the reference patent except for “generate a rolling window comprising a plurality of context windows … fill a first one of the plurality of context windows of the rolling window with select ones of … a second one of the plurality of context windows of the rolling window … obtain from a deep learning model source code based on the data of the rolling window.” However, this is taught in analogous art, Clement (e.g., Figs. 1-2 on pp. 3-4 and associated text, e.g., p. 3, right col., top para. – p. 4, left col., top para., descending the priority list, taking elements until the context window has been filled … For code completion, as we use an autoregressive decoder in the form of XGPT-C [deep learning model] there is no special ‘position,’ and so we create a rolling window across the focal method body … In the case of a method which exceeds 256 tokens, the training sample for that method is decomposed into multiple ‘windows’; p. 2, § 2, 1st para., Code completion is the auto-regressive prediction of one or more tokens conditioned on a provided context.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of the reference patent with the invention of Clement, such that code generation uses multiple context windows for long source code method contexts, because allowing the model to consider more context information could result in more accurate code completion.
With respect to claim 2, as illustrated in the table above, all of the limitations are disclosed in claim 2 of the reference patent.
With respect to claim 3, as illustrated in the table above, all of the limitations are disclosed in claim 1 of the reference patent.
With respect to claim 4, as illustrated in the table above, all of the limitations are disclosed in claim 4 of the reference patent.
With respect to claim 5, as illustrated in the table above, all of the limitations are disclosed in claim 5 of the reference patent.
With respect to claims 6-8, as illustrated in the table above, all of the limitations are disclosed in claim 8 of the reference patent.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Clement et al. “Long-Range Modeling of Source Code Files with eWASH: Extended Window Access by Syntax Hierarchy” (hereinafter Clement)3.
With respect to claim 1, Clement discloses A system comprising: one or more processors; and a memory that stores one or more programs that are configured to be executed by the one or more processors, the one or more programs including instructions to perform actions that (e.g., Figs. 3-4 on p. 7 and associated text, e.g., p. 7, § 7.2.1 Code Completion Evaluation Results, As shown in Table 1, eWASH allows XGPT-C to beat both the GPT-C baseline and the memory efficient transformers on all the metrics computed; Examiner notes that the claimed system, processors, and memory are required for the disclosed completion evaluation.):
extract a first plurality of syntax elements from a source code program to represent a focal method and a second plurality of syntax elements to represent a context of the focal method (e.g., Figs. 1-2 on pp. 3-4 and associated text, e.g., Abstract, Using concrete syntax trees of each source file we extract syntactic hierarchies; p. 3, left col., top para. - right col., top para., we center method bodies as the focus of the modeling task for eWASH, calling each method being modeled the ‘focal method.’ Figure 2 shows how eWASH uses syntax hierarchies to prioritize elements … The most important part for modeling the body of this focal method is its signature and docstring (if present) and containing class definition (if the focal method is a class method) [first plurality of syntax elements from a source code program to represent a focal method]. After this we prioritize global import statements and assigned values (but not yet the assigned expressions), followed by class attributes, peer class method signatures, class docstring, peer class method docstrings, and finally global expressions and the code bodies of peer class methods [a second plurality of syntax elements to represent a context of the focal method]. In practice, eWASH is implemented by taking the concrete syntax tree of the source file and organizing the syntactic elements in our priority list, tokenizing each element.);
generate a rolling window comprising a plurality of context windows (e.g., Figs. 1-2 on pp. 3-4 and associated text, e.g., p. 3, right col., last para. – p. 4, left col., top para., we create a rolling window across the focal method body … In the case of a method which exceeds 256 tokens, the training sample for that method is decomposed into multiple ‘windows’.);
fill a first one of the plurality of context windows of the rolling window with select ones of the first plurality of syntax elements (e.g., Figs. 1-2 on pp. 3-4 and associated text, e.g., p. 3, right col., top para. – p. 4, left col., top para., descending the priority list, taking elements until the context window has been filled … We reserve … 1/4 (256/1024 tokens) for the rolling window of the body [select ones of the first plurality of syntax elements]. In the case of a method which exceeds 256 tokens, the training sample for that method is decomposed into multiple ‘windows’.) and a second one of the plurality of context windows of the rolling window with select ones of the second plurality of syntax elements, wherein the select ones of the second plurality of syntax elements are selected based on a pre-configured order (e.g., Figs. 1-2 on pp. 3-4 and associated text, e.g., p. 3, right col., top para. - p. 4, left col., top para., descending the priority list, taking elements until the context window has been filled … We reserve 3/4 (768/1024 tokens) of the tokens for the context [select ones of the second plurality of syntax elements] … In the case of a method which exceeds 256 tokens, the training sample for that method is decomposed into multiple ‘windows’ … Fig. 2: … selectively fills the model context going down the file, in the order of priority level indicated, and stops when the token budget of the model context is filled.); and
obtain from a deep learning model source code based on the data of the rolling window (e.g., Fig. 1 on p. 3, particularly XGPT-C Code Completion, and associated text, e.g., p. 2, § 2, 1st para., Code completion is the auto-regressive prediction of one or more tokens conditioned on a provided context; p. 3, right col., last para. – p. 4, left col., top para., For code completion, as we use an autoregressive decoder in the form of XGPT-C [deep learning model] there is no special ‘position,’ and so we create a rolling window across the focal method body.).
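For the applicant’s convenience only, the priority-fill and rolling-window mechanism of Clement as applied above may be sketched as follows. This is a non-limiting illustration by the Examiner, not code from the reference: the function names and the sample element tokens are hypothetical, while the token budgets (768 of 1024 tokens reserved for the prioritized context, 256 for the rolling window over the focal method body) follow the passages of Clement quoted above.

```python
# Illustrative sketch only (not Clement's implementation): fill a fixed token
# budget by descending a priority list of syntax elements, then decompose an
# over-long focal-method body into multiple fixed-size "windows".

def fill_context(prioritized_elements, budget):
    """Take tokenized elements in priority order until the budget is filled."""
    context, used = [], 0
    for tokens in prioritized_elements:
        if used + len(tokens) > budget:
            break  # stop when the token budget of the model context is filled
        context.extend(tokens)
        used += len(tokens)
    return context

def rolling_windows(body_tokens, window_size):
    """Decompose a focal-method body into consecutive fixed-size windows."""
    return [body_tokens[i:i + window_size]
            for i in range(0, len(body_tokens), window_size)]

# Hypothetical example: a 1024-token model context reserving 3/4 (768 tokens)
# for the prioritized context and 1/4 (256 tokens) for the rolling window.
priority_list = [["import", "os"], ["X", "="], ["class_attr"]]  # hypothetical
context = fill_context(priority_list, budget=768)
windows = rolling_windows(list(range(600)), window_size=256)  # 600-token body
```

A 600-token body thus yields three training windows (256, 256, and 88 tokens), corresponding to Clement’s decomposition of a method exceeding 256 tokens into multiple windows.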
With respect to claim 2, Clement also discloses wherein the pre-configured order includes a hierarchical list of syntax elements (e.g., Figs. 1-2 on pp. 3-4 and associated text, e.g., p. 3, left col., 1st full para., Figure 2 shows how eWASH uses syntax hierarchies to prioritize elements of the context … After this we prioritize global import statements and assigned values (but not yet the assigned expressions), followed by class attributes, peer class method signatures, class docstring, peer class method docstrings, and finally global expressions and the code bodies of peer class methods of the method and code completion example of Fig. 1.).
With respect to claim 3, Clement also discloses wherein the second plurality of syntax elements have a scope that differs from a local scope of the focal method (e.g., Figs. 1-2 on pp. 3-4 and associated text, e.g., pp. 2-3, § 2.1., Software developers carefully organize their code into scopes and portable elements using hierarchies like methods and classes ... The most important part for modeling the body of this focal method is its signature and docstring (if present) and containing class definition (if the focal method is a class method). After this we prioritize global import statements and assigned values (but not yet the assigned expressions), followed by class attributes, peer class method signatures, class docstring, peer class method docstrings, and finally global expressions and the code bodies of peer class methods.).
With respect to claim 4, Clement also discloses wherein the first plurality of syntax elements includes a method signature of the focal method, a method docstring of the focal method, and/or a class name of the focal method (e.g., Figs. 1-2 on pp. 3-4 and associated text, e.g., p. 3, left col., 1st full para., The most important part for modeling the body of this focal method is its signature and docstring (if present) and containing class definition (if the focal method is a class method).).
With respect to claim 5, Clement also discloses wherein the second plurality of syntax elements include a global import statement, a method signature of a peer method of a class of the focal method, a docstring of a class of the method signature, a global expression, and/or a method body of a method of the class of the focal method (e.g., Figs. 1-2 on pp. 3-4 and associated text, e.g., p. 3, left col., 1st full para., After this we prioritize global import statements and assigned values (but not yet the assigned expressions), followed by class attributes, peer class method signatures, class docstring, peer class method docstrings, and finally global expressions and the code bodies of peer class methods.).
With respect to claim 6, Clement also discloses wherein the pre-configured order priorities global import statements over other syntax elements (e.g., Figs. 1-2 on pp. 3-4 and associated text, e.g., p. 3, left col., 1st full para., we prioritize global import statements and assigned values (but not yet the assigned expressions), followed by class attributes, peer class method signatures, class docstring, peer class method docstrings, and finally global expressions and the code bodies of peer class methods.).
With respect to claim 7, Clement also discloses wherein the pre-configured order priorities assigned values over other syntax elements than global import statements (e.g., Figs. 1-2 on pp. 3-4 and associated text, e.g., p. 3, left col., 1st full para., we prioritize global import statements and assigned values (but not yet the assigned expressions), followed by class attributes, peer class method signatures, class docstring, peer class method docstrings, and finally global expressions and the code bodies of peer class methods.).
With respect to claim 8, Clement also discloses wherein the pre-configured order priorities class attributes over other syntax elements than global import statements and assigned values (e.g., Figs. 1-2 on pp. 3-4 and associated text, e.g., p. 3, left col., 1st full para., we prioritize global import statements and assigned values (but not yet the assigned expressions), followed by class attributes, peer class method signatures, class docstring, peer class method docstrings, and finally global expressions and the code bodies of peer class methods.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Specifically, Svyatkovskiy et al. US 20200249918 teaches using a deep learning model to complete a method invocation by extracting features representing syntactic context; Kothuvatiparambil et al. US 20200051552 teaches using a fixed-size window to slide across a token sequence; Banuelos et al. US 20190227774 teaches performing source code completion by reading source code and generating a corresponding syntax tree and semantic model; Fu et al. US 20190303108 teaches using machine learning models to predict a method invocation more likely to follow a sequence of method invocations for a custom class and for an overlapping class; Lichtenau et al. US 20220405050 teaches a sliding window that is configured to move over an input tensor of a function to produce an output tensor; Beltagy et al., “Longformer: The Long-Document Transformer” teaches building contextual representations of the entire context using multiple layers of attention; Svyatkovskiy et al., “IntelliCode Compose: Code Generation Using Transformer” teaches using a generative transformer to perform code completion by predicting sequences of code tokens; Haque et al., “Improved Automatic Summarization of Subroutines via Attention to File Context” teaches generating code summaries based on function-level context and file-level context; Hussain et al., “CodeGRU: Context-aware Deep Learning with Gated Recurrent Unit for Source Code Modeling” teaches source code completion based on a source code language model that captures the source code’s contextual, syntactical and structural dependencies; Svyatkovskiy et al., “Pythia: AI-assisted Code Completion System” teaches code completion using deep learning models trained on code contexts extracted from abstract syntax trees.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHEN DAVID BERMAN whose telephone number is (571)272-7206. The examiner can normally be reached on M-F, 9-6 Eastern.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hyung S. Sough can be reached on 571-272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEPHEN D BERMAN/Examiner, Art Unit 2192
1 See [0077], “The data in the context window is used to train a neural transformer model and is used by the trained neural transformer model to generate source code … The data consists of various sequences of tokens representing syntax elements.”; [0085], “For a code completion task, there is a rolling window across the focal method body. The rolling window may include multiple context windows.”; [0107], “Fig. 6B illustrates an exemplary method 620 for generating a context window based on a syntax hierarchy for the model to use to generate predicted source code … The sequence of tokens is tokenized into subtokens and filled into one or more context windows … The context windows are then applied to the neural transformer with attention model …”
2 Although Clement et al. “Long-Range Modeling of Source Code Files with eWASH: Extended Window Access by Syntax Hierarchy” was published less than 1 year before the effective filing date of the instant application and names the instant application’s inventors as authors of the publication, it still qualifies as prior art because it has four additional authors who are not inventors of the instant application. See MPEP § 2153.01(a) Grace Period Inventor-Originated Disclosure Exception, “If, however, the application names fewer joint inventors than a publication (e.g., the application names as joint inventors A and B, and the publication names as authors A, B and C), it would not be readily apparent from the publication that it is an inventor-originated disclosure and the publication would be treated as prior art under AIA 35 U.S.C. 102(a)(1) ….”
3 Although Clement et al. “Long-Range Modeling of Source Code Files with eWASH: Extended Window Access by Syntax Hierarchy” was published less than 1 year before the effective filing date of the instant application and names the instant application’s inventors as authors of the publication, it still qualifies as prior art because it has four additional authors who are not inventors of the instant application. See MPEP § 2153.01(a) Grace Period Inventor-Originated Disclosure Exception, “If, however, the application names fewer joint inventors than a publication (e.g., the application names as joint inventors A and B, and the publication names as authors A, B and C), it would not be readily apparent from the publication that it is an inventor-originated disclosure and the publication would be treated as prior art under AIA 35 U.S.C. 102(a)(1) ….”