DETAILED ACTION
This action is responsive to the Application filed 1/08/2024.
Accordingly, claims 1-17 are pending and are examined on the merits.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claim 1 is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as explained in the following two-step analysis.
Step I:
Claim 1 is directed to the method (process) category of statutory subject matter.
Step II, A:
The steps recited as “determining an operation”, “determining one or more parameters”, “identifying a template”, and “populating the template with the operation and … the parameters”, absent further details, can be construed as activities performable mentally by a human, since “determining” and “filling” a “template” are processes that can be carried out and retained inside the human mind, e.g., forming a mental image and filling it with abstracted parametric conceptions. That is, the above steps fall into an abstract-idea scenario in which information received from a (computer) interface is deduced, analyzed, and rearranged mentally; the steps therefore do not integrate the “method” into a practical application, and the claim is directed to a judicial exception of the abstract-idea type. See MPEP § 2106.04(a)(2).
Step II, B:
The additional elements identified in this method claim include (i) “receiving a statement in natural language”, (ii) “(determining) by a neural network”, and (iii) “generate the source code in the target domain-specific language”.
As for (i), this step amounts to receiving information via use of a generic computer for the analysis or deduction to be performed. It can be viewed as insignificant extra-solution and/or well-known activity that does not add any unconventional transformation to the judicial exception identified in Step 2A – see MPEP § 2106.05(a)-(g).
As for (ii), the use in modern natural language processing (NLP) of a neural network (NN) to perform extraction, capture of semantic relationships, and contextual grouping among lexical elements of a natural language text is considered a known routine: it automates, at large scale, a numerical technique over non-computerized/conventional methods that rely on written rules to learn the meaning and relational context of words. Therefore, determination by a neural network (as recited) as a means for natural language processing cannot bring forth a novelty or clear improvement to the “determining” or “identifying” steps, since NL processing itself is a well-known, well-understood technique. Thus, additional element (ii) cannot make the judicial exception of Step 2A amount to significantly more than an abstract idea, absent any details on how the “use” of a neural network (emphasis here) would proffer detailed, significant improvements over this well-known technique of extraction, grouping, capture of relationships, and identification of the semantic context of NL elements – see MPEP § 2106.05(d).
As for (iii), the claim language does not provide details indicating how the use of templatized information in generating source code integrates a significant improvement into the existing field of code generation; i.e., there is a lack of sufficient detail to demonstrate that the software creation brings forth a clear inventive arrangement or a solution to a particular industrial field, a special type of application, or hardware. Hence element (iii) amounts to a mere step of providing generic source code as post-solution activity to the “determining” step identified in Step 2A; element (iii) thus fails to elevate the method of claim 1 to significantly more than a judicial exception – see MPEP § 2106.05(c). That is, the method of claim 1 is not integrated into a practical application even upon consideration of the additional elements per Step 2B.
Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claim 9 is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as explained in the following two-step analysis.
Step I: Claim 9 is directed to the machine (system) category of statutory subject matter.
Step II, A:
The claim recites the steps of “determine” (an operation), “determine” (one or more parameters), “identify” (a template of domain-specific language), and “populate the template with the operation and … the parameters”. Absent further details demonstrating that these steps are implemented with a special machine or algorithms, the system claim as a whole is directed to a mental process by a human performing acts of determining and identifying (based on analyzing natural language input) and filling a mental template using results from the determining retained inside the human mind. The mental-process steps of claim 9 thus do not integrate the system claim into a practical application; the claim is therefore directed to a judicial exception of the abstract-idea type. See MPEP § 2106.04(a)(2).
Step II, B:
The additional elements of claim 9 comprise (i) “receive a statement in natural language”, (ii) “as to generate the source code in the target domain-specific language”, and (iii) “template in a target domain-specific language”. There is no detail to the domain-specificity in (iii) that clearly demonstrates that a particular domain is being technically improved by the use of a “template”. There are no sufficient details showing that the “receiving” in (i) of a statement in natural language defines use of a non-conventional machine, or of a special technique that improves upon the standard manner of receiving text; hence additional element (i) is construed as a well-known routine using a generic interface (computer) to receive/acquire text data – see MPEP § 2106.05(a), (b), (d). As for element (ii), the “generating of source code” is construed as code created in generic terms and/or as intended use, without details about a software-based transformation of a technology, a software application, or an industrial field into an improved instance thereof; it therefore cannot elevate the system of claim 9 to significantly more than a judicial exception – see MPEP § 2106.05(c).
Therefore, system claim 9 is rejected as directed to a judicial exception of the abstract-idea type, i.e., ineligible subject matter that is not integrated into a practical application.
Analysis of dependent claims.
Claim 2 recites the domain-specific language as defining a set of operations, with the operation determined by the NN being one of the set of operations; these limitations fail to add a substantial improvement to the mental process identified per Step 2A.
Claim 3 recites the neural network determining respective probabilities of classes corresponding to respective operations; but the class probabilities are not tied to the identification of a particular operation or template, and hence cannot be viewed as improving the step of determining an operation toward populating a template; the limitation remains part of the judicial exception of Step 2A.
Claim 4 recites training the neural network with data comprising real-world text in natural language; this can be viewed as a well-understood routine as set forth above – see MPEP § 2106.05(a)-(b), (d) – and fails to provide an improvement to the template-based source-code generation as claimed.
Claim 5 recites generating synthetic data from the text statements to yield new statements; use of such synthesis in association with a received NL statement can be viewed as a well-known routine, and thus fails to demonstrate a clear improvement to any field of text processing.
Claim 6 recites replacing words with new words so as to define synonyms; but this finding of new words does not make the template identifying and/or populating a significant improvement to the NL processing and/or the source-code generating, nor does it render the mental process of Step 2A significantly more than the judicial exception.
Claim 7 recites rearranging the order of words and defining a new order of words; but, taken as a whole, the rearranging fails to render the mental process of claim 1 significantly more than the judicial exception.
Claim 8 recites that the neural network is also trained on the synthetic data; but this limitation, disconnected from the identifying and determining steps of claim 1, makes it impossible to see whether this training would bring any novelty or improvement to the template identification and determination observed in claim 1.
Claims 10-16 repeat the limitations of claims 2-8 and are likewise deemed insufficient to make claim 9 amount to significantly more than a judicial exception.
Claim 17 recites a computer-readable-medium version of claim 1 and hence incorporates by default the judicial exception of that claim.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 4, 9-10, and 17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Gutta et al., USPubN 2023/0038529 (herein Gutta).
As per claim 1, Gutta discloses a computer-implemented method of generating source code in a target domain-specific language, the method comprising:
receiving a statement (text documents, text section of the natural language document – para 0005) written in natural language text (natural language document – para 0039; natural language document – para 0018);
determining, by a neural network (set of neural networks – para 0005; AI learning models, machine learning models, employs a neural network with multiple layers – para 0022; a set of deep neural network models – para 0017), an operation (indicating an entity, a protocol operation (“action”) type associated with the code template – para 0017; a smart contract may include execution or “calling” – para 0025 – Note1: processing a text statement using a set of neural networks to identify values or an entity such as an “action” (protocol operation: step 416 – Fig. 4), as well as a set of numeric values to populate a corresponding template targeted at generating program code for smart-contract “calling” within a blockchain network, reads on determining by a NN an operation and an associated set of parameters with which to generate a program language expressed as bytecode specific to the smart-contract interactions and blockchain network domain – see para 0035) intended by the statement (see natural language text section from above);
based on the operation (, a protocol operation (“action”) – para 0017), determining one or more parameters (code template … may be selected … and populated based on numeric values … for example an entity identifier, a value of a conditional statement, a first date, a second date … a first set of numeric values – para 0005) that correspond to the operation (protocol operation (“action”) type associated with the code template – para 0017; step 416 – Fig. 4);
based on the operation, identifying a template (associated with the code template – para 0017; a code template … may be selected – para 0005) in the target domain-specific language (program code that aggregates transactions between … smart contract hosted on a blockchain network – para 0066; bytecode version of program code to a blockchain network or distributed ledger – para 0067; bytecode version – para 0035; generated bytecode on a blockchain network – para 0017); and
populating the template with the operation and the one or more parameters (be selected … and populated based on numeric values … for example an entity identifier, a value of a conditional statement, a first date, a second date … a first set of numeric values – para 0005), so as to generate the source code (generating associated program code based on n-grams and first set of numeric values … based on the document – para 0005; populate fields of the template to generate program code … compile program code into bytecode – para 0017) in the target domain-specific language.
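By way of illustration only – this sketch is neither the claimed implementation nor Gutta's disclosure; all names, the template inventory, and the stub classifier are hypothetical – a minimal Python pipeline of the template-population kind recited above might read:

    # Hypothetical sketch of an operation/template/parameter pipeline.
    # The "neural network" steps are replaced by stubs for brevity.
    from string import Template

    # Hypothetical template inventory for a target domain-specific language.
    OPERATION_TEMPLATES = {
        "transfer": Template("transfer(from=$src, to=$dst, amount=$amount);"),
        "query":    Template("select(entity=$entity, field=$field);"),
    }

    def classify_operation(statement: str) -> str:
        # Stand-in for the claimed neural-network determination of an
        # operation intended by the statement.
        return "transfer" if "send" in statement.lower() else "query"

    def extract_parameters(statement: str, operation: str) -> dict:
        # Stand-in for determining parameters that correspond to the
        # operation; a real system would derive these from the input.
        if operation == "transfer":
            return {"src": "alice", "dst": "bob", "amount": "10"}
        return {"entity": "account", "field": "balance"}

    def generate_source(statement: str) -> str:
        operation = classify_operation(statement)           # determine operation
        params = extract_parameters(statement, operation)   # determine parameters
        template = OPERATION_TEMPLATES[operation]           # identify template
        return template.substitute(params)                  # populate template

    print(generate_source("Please send 10 tokens to Bob"))
    # -> transfer(from=alice, to=bob, amount=10);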
As per claim 2, Gutta discloses the computer-implemented method as recited in claim 1, wherein the target domain-specific language (refer to Note1 above) defines a set of operations (smart contract may include execution or “calling” or protocol action – para 0025, 0017), and the operation determined by the neural network is one of the operations (a protocol operation (“action”) type associated with the code template – para 0017; step 416 – Fig. 4) in the set of operations (refer to the calling instances in a smart contract and blockchain network from Note1).
As per claim 4, Gutta discloses the computer-implemented method as recited in claim 1, the method further comprising:
training the neural network on training data associated with the target domain-specific language (bytecode specific to the smart-contract interactions and blockchain NW domain – see para 0035), the training data comprising real-world text statements written in natural language (natural language document – para 0039; natural language document – para 0018; para 0005).
As per claim 9, Gutta discloses a computing system configured to generate source code in a plurality of domain-specific languages, the computing system comprising: one or more processors; and a memory storing instructions that, when executed by the one or more processors, cause the computing system to:
receive a statement written in natural language text;
determine an operation intended by the statement;
based on the operation, determine one or more parameters that correspond to the operation;
based on the operation, identify a template in a target domain-specific language of the plurality of domain-specific languages; and
populate the template with the operation and the one or more parameters, so as to generate the source code in the target domain-specific language.
(all of which have been addressed in claim 1)
As per claim 10, refer to rejection of claim 2.
As per claim 17, Gutta discloses a non-transitory computer-readable storage medium including instructions that, when processed by a computing system, configure the computing system to perform the method according to claim 1.
(all of which have been addressed in claim 1)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Gutta et al., USPubN 2023/0038529 (herein Gutta) in view of Xu et al., USPubN 2022/0171947 (herein Xu).
As per claim 3, Gutta does not explicitly disclose the computer-implemented method as recited in claim 2, wherein determining the operation intended by the statement further comprises:
determining, by the neural network, respective probabilities associated with a plurality of classes, each class in the plurality of classes corresponding to a respective operation in the set of operations.
Xu discloses prediction models (para 0167), such as DNNs and CNNs, for natural language processing. The models are trained on sentences from utterances or text input, representing the word-level or n-gram features as topic vectors of the respective text/NLP features (para 0139-0141). The training output includes class probabilities whose values are evaluated each time in determining whether a given utterance portion corresponds to a respective class among the set of classes yielded as intermediate outputs from a given classifier instance (para 0168), using a logit function to calculate a value for each utterance instance so as to fit a distribution of class probability values to that utterance (para 0169), yielding a logarithm of odds which can be weighted by a centroid effect (para 0182) so as to correspond a set of binary classifiers with a logit (cross-entropy) function that underlies the validity of each utterance target (para 0170). By adapting modification of this function, the classification system governs the distance between the intermediate output and the centroid representation, enabling fine-tuning of the training target (hyperparameter) (para 0184). Hence, use of a cross-entropy activation function (para 0191), and managing it (Fig. 7-8) in conjunction with computed class probability values to correlate correspondence of an item of the utterance input with a set of classes generated as intermediate outputs by the NN, entails improving prediction accuracy of the model by minimizing the difference between the output class probability values and a centroid representing an item (utterance) of the input stream.
That is, the neural network as in Gutta can weigh the significance of a lexical item among the input text against the probability values of intermediate classes generated by a stage of the NN predictive model, so as to fine-tune the accuracy of the lexical-item target under evaluation by the training; the most relevant lexical item is identified within a cross-entropy space in that the probability values of the respective intermediate classes output from a given classifier run are used with a loss function to attain the lowest distance separating each probability from the lexical item targeted by the training.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to implement the neural network for determining an operation as part of the natural language text or document in Gutta's NLP system, so that confirming the validity or intent of a feature among the words or lexical elements of the natural language would include using the neural network to calculate, for each class generated as intermediate output by a respective classifier run, one or more probability values, and correlating the distance from those probability values (of the class) to centroid values under a loss function as set forth in Xu, whereby the loss function and adjustment thereof enable minimizing the distance from the probability values to the centroid representation, for the NN to consolidate the weight and significance of a given lexical element, or of a word indicative of a function or operation among the set thereof underlying “calling” activities within the business domain of smart contracts or blockchain networks in Gutta; because
use of a prediction model with layered execution of, or outputs from, a neural network model allows classes of intermediate output, generated from topic vectors formed from a natural language stream of lexical elements of a text document or utterance statements, to be evaluated against a minimizing function. The probability values of the intermediate output classes can thereby be correlated to the centroid representation of the lexical item targeted via the hyperparametric setting of the neural network model run, enabling iterative improvement of the entropy space by minimizing the distance from the probability of the intermediate output to the central point set by the minimizing function. This in turn enables the NN model to determine the optimal set of output classes that best consolidates the validity of the lexical item established as a hyperparameter to tune as part of the multi-layered classification model; e.g., a validated lexical item, as well as its contextual metadata, can be set forth by the NN as an element/action of functional significance, along with pertinent parameters, forming a specific package by which domain-specific code generation can be initiated to programmatically suit the code calling of business activities that belong to the smart-contract and blockchain domain in Gutta.
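By way of illustration only – not Xu's actual model – the following minimal Python sketch shows, in the abstract, how a classifier head yields per-class probabilities over a hypothetical set of operations and how a cross-entropy loss scores a target class; all logits and class names are hypothetical:

    import numpy as np

    OPERATIONS = ["transfer", "query", "mint", "burn"]  # hypothetical class set

    def softmax(logits: np.ndarray) -> np.ndarray:
        z = logits - logits.max()       # shift for numerical stability
        e = np.exp(z)
        return e / e.sum()

    def cross_entropy(probs: np.ndarray, target_idx: int) -> float:
        # The loss is small when probability mass sits on the target class.
        return float(-np.log(probs[target_idx] + 1e-12))

    logits = np.array([2.1, 0.3, -1.0, -0.5])  # pretend final-layer outputs
    probs = softmax(logits)
    for op, p in zip(OPERATIONS, probs):
        print(f"P({op}) = {p:.3f}")
    print("loss vs. 'transfer':", cross_entropy(probs, OPERATIONS.index("transfer")))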
As per claim 11, Gutta discloses computing system as recited in claim 10, the memory further storing instructions that, when executed by the one or more processors, further cause the computing system to:
determine respective probabilities associated with a plurality of classes, each class in the plurality of classes corresponding to a respective operation in the set of operations.
(all of which having been addressed in claim 3)
As per claim 12, refer to rejection of claim 4.
Claims 5-8 and 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Gutta et al., USPubN 2023/0038529 (herein Gutta) in view of Sellam et al., USPubN 2022/0067309 (herein Sellam) and Stabler et al., USPubN 2021/0224486 (herein Stabler), and further in view of Jalaluddin et al., USPubN 2021/0304733 (herein Jalaluddin) and Nakao et al., USPubN 2019/0179908 (herein Nakao).
As per claims 5-8, Gutta does not explicitly disclose the computer-implemented method as recited in claim 4, the method further comprising:
(i) generating synthetic data from the real-world text statements written in natural language, the synthetic data defining new text statements written in natural language;
(ii) wherein generating the synthetic data further comprises replacing one or more words of the real-world text statements with one or more synonyms of the one or more words, so as to define the new text statements written in natural language that include the one or more synonyms;
(iii) wherein generating the synthetic data further comprises rearranging an original order of one or more words of the real-world text statements, so as to define the new text statements written in natural language that include words in a different order as compared to the original order; and
(iv) wherein the training data further comprises the synthetic data such that the neural network is also trained on the synthetic data.
Gutta discloses use of TF-IDF vectorization as part of classifying the element corpus of the document into categories or features, converting unstructured text into a fixed-length numerical representation. TF-IDF is a form of synthesis that transforms a large corpus into its most relevant grouping (a fixed-length vector), based on weighing the frequency of occurrence of words (term frequency, or TF), the semantic contribution of each term, and/or its corpus-wide rarity (IDF), with filtering of noise coupled with consideration of n-grams or phrase-level context.
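By way of illustration only, a generic TF-IDF vectorization of the kind described can be sketched with scikit-learn as follows; the corpus and settings are hypothetical and are not drawn from Gutta:

    from sklearn.feature_extraction.text import TfidfVectorizer

    corpus = [
        "transfer ten tokens to the vendor account",
        "query the balance of the vendor account",
        "transfer tokens when the contract condition is met",
    ]

    # Unigrams and bigrams supply phrase-level context; max_features caps
    # the fixed-length numerical representation.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=50)
    matrix = vectorizer.fit_transform(corpus)

    print(matrix.shape)                           # (documents, features)
    print(vectorizer.get_feature_names_out()[:10])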
As for (i) and (ii):
Performing synthesis of natural language data is shown in Sellam's method of training a neural network, where a pair of synthetic versions (original and modified) of a text passage can form the input to the training (para 0004), and where a plurality of sentence pairs fed to the neural network enables fine-tuning a grade allocated for each pair (para 0005). Creation of the synthetic sentence pair effects random substitution by replacing words in passage A with replacement words, resulting in a second passage B (para 0028), or random omission of text (para 0029). Hence, synthetic transformation of the input text, in terms of omission or substitution of the original text passage for generating input to the NN learning, entails replacement or omission to form a new version of the original text.
Similar to the grouping of terms of contextual weight by TF-IDF from a large unstructured corpus, replacement of original words with equivalent terms as part of a synthesis of text or utterance data is shown in Jalaluddin's natural language processing and configuration of training models. There, training includes pre-labeling of text (a phrase or sentence) in accordance with an intent, using data augmentation or irrelevant-text addition (para 0116-0119) by way of randomly augmenting the original text with synonym insertion or replacement, position swapping, or deletion, and substituting the word or n-gram level of the original stream with a vector and numerical score (TF-IDF score – para 0124), so as to automatically render more agnostic the synthesized aspect of the training set particularly altered for the training (para 0123), or to render the training more robust (para 0117). Hence, modifying a statement with synonyms, position swapping, or word deletion entails generating a new statement resulting from the modification.
Nakao discloses voice recognition or sentence evaluation by way of a neural network (para 0114), with training information provided as n-gram models (para 0056, 0125; claim 5, pg. 11), via use of a controller that performs synthesis on the voice data (para 0127; voice recognition text – para 0079; S18 – Fig. 10), evaluates an utterance value by correlating previous and present occurrences of the text utterance (para 0130) with respect to an appearance probability (para 0080; Fig. 8) or a predetermined threshold (Fig. 7), and thereby determines new data (para 0089-0091; claim 1, pg. 11) from the voice-text recognition process. Hence, performing synthesis of input text from an utterance stream, using a neural network set on an n-gram model, to derive new data from the utterance text is recognized.
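By way of illustration only – a toy sketch, not the actual methods of Sellam, Jalaluddin, or Nakao – synonym replacement and random word omission over a statement might read as follows; the synonym table is hypothetical:

    import random

    SYNONYMS = {"send": ["transfer", "dispatch"], "money": ["funds", "tokens"]}

    def replace_synonyms(statement: str, rng: random.Random) -> str:
        # Substitute words that have entries in the (hypothetical) table.
        return " ".join(rng.choice(SYNONYMS[w]) if w in SYNONYMS else w
                        for w in statement.split())

    def drop_random_word(statement: str, rng: random.Random) -> str:
        # Omit one randomly chosen word, yielding a new statement.
        words = statement.split()
        if len(words) > 1:
            del words[rng.randrange(len(words))]
        return " ".join(words)

    rng = random.Random(0)
    original = "please send the money to bob"
    print(replace_synonyms(original, rng))   # substitution variant
    print(drop_random_word(original, rng))   # omission variant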
As for (iii):
Jalaluddin discloses transforming text for use in vector-based configuration of topic models (para 0124) using random position swapping (para 0123) as an augmentation of the original text, as part of rendering the input to a training more agnostic, so as to improve the likelihood of identifying a topic of weight or significance in the training.
Similarly, Stabler discloses adversarial training for NL using substitution of the original text, in terms of replacing words/phrases with equivalent words/phrases, removing words or phrases, swapping characters, or switching the order of words or phrases, in association with preserving the composition of the original linguistic set and maintaining a semblance of symmetries (para 0036; permutation symmetries – para 0139), e.g., to simulate a higher likelihood of misclassification (para 0006-0008) by the machine learning as part of the adversarial training (Fig. 5-7), whereby model accuracy can be made more effective (para 0124) than with a simple data-augmentation strategy.
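By way of illustration only – a toy sketch, not Jalaluddin's or Stabler's implementation – random position swapping of two words might read:

    import random

    def swap_positions(statement: str, rng: random.Random) -> str:
        # Exchange two randomly chosen word positions to define a new
        # statement with a different word order.
        words = statement.split()
        if len(words) < 2:
            return statement
        i, j = rng.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
        return " ".join(words)

    rng = random.Random(1)
    print(swap_positions("send ten tokens to the vendor account", rng))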
As for (iv):
Sellam discloses training over the set of synthesized data in the form of modified and original sentence pairs (para 0004-0005).
Stabler discloses applying irrelevant-data augmentation, in a synthetic, agnostic manner, to the natural language input of a training over original utterances (para 0030, 0111, 0121).
Thus, restructuring an incoming text stream via a synthesis technique enables grouping and identifying phrases of significant weight and context, by which the input to a training can be represented by a numerical entity/structure identifiable by a value, which in turn facilitates classification and evaluation thereof by the stages of a neural network; the synthesizing of input to render a training on a natural language stream or data set more robust is thereby recognized.
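By way of illustration only, assembling a training set comprising both real-world statements and their synthetic variants (the helper below is a hypothetical stand-in for the transformations sketched above) might read:

    def reverse_words(statement: str) -> str:
        # Hypothetical stand-in for synonym/swap/omission transformations.
        return " ".join(reversed(statement.split()))

    real_statements = ["send ten tokens to bob", "query the vendor balance"]
    synthetic = [reverse_words(s) for s in real_statements]
    training_data = real_statements + synthetic   # the NN trains on both
    print(len(training_data), "training statements")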
Therefore, it would have been obvious to one skilled in the art, before the effective filing date of the claimed invention, to implement the training of real-world text statements in Gutta's system so that the training is adapted with the result of an initial data synthesis, or decomposition into significant contextual groupings, including
1) generating synthetic data from the real-world text statements written in natural language, in accordance with a synthesis capability for defining new text statements in natural language – as shown in the replacement by Sellam and Jalaluddin, and in the identification of new data in Nakao's synthesis of utterance text;
2) generating the synthetic data by replacing one or more words of the real-world text statements with one or more synonyms of the one or more words, so as to define the new text statements written in natural language that include the one or more synonyms – as set forth in Jalaluddin, or in the word replacement of Sellam;
3) generating the synthetic data by rearranging an original order of one or more words of the real-world text statements – as set forth in the position swapping of Jalaluddin and the word-order switching of Stabler – so as to define the new text statements written in natural language that include words in a different order as compared to the original order; and
4) the training data further comprising the synthetic data such that the neural network is also trained on the synthetic data – as shown in Sellam and Stabler, as set forth above; because
input in natural language most often comes in an unstructured format that makes it difficult to parse out terms, groupings, or phrases of application-specific significance, functional weight, or contextual merit. By restructuring the data into a more organized form, particular groups can be extracted and assigned a value or a label, thereby facilitating the classification aspect of a NN or machine-learning model configured to train on the textual input. Using a synthesis stage to pre-process the initial unstructured text data, so as to separate or distinguish portions thereof through intentional text augmentation as set forth above, enables the augmented or modified version – via synonym replacement, word omission, word swapping, word-order altering, or creation of new statements – to reinforce the agnostic aspect of the data synthesis prior to submitting it to a training model. This scales up and enlarges the possibilities of miscalculation and re-evaluation by the training in the course of filtering out unacceptable classification outcomes and reaching a more optimal score, typical of training geared toward fine-tuning a hyperparameter or adapting to a minimization function. The augmented NL input, provided to the layers of a NN as in Gutta, forces the training to execute upon the altered version of the synthesized input set, iteratively eliminating unsatisfying data sets and ultimately realizing a best classification score; i.e., on the basis of an intentional text-input augmentation by a synthesis stage that poses or inserts adversarial motivation, or added challenge, to the target evaluation of the training in the face of said imposed challenge.
As per claims 13-16, refer to rejection of claims 5-8, respectively.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tuan A Vu, whose telephone number is (571) 272-3735. The examiner can normally be reached Monday through Friday, 8:00 AM to 4:30 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chat Do, can be reached at (571) 272-3721.
The fax phone number for the organization where this application or proceeding is assigned is (571) 273-3735 (for non-official correspondence; please consult the examiner before using) or (571) 273-8300 (for official correspondence); calls may also be redirected to customer service at (571) 272-3609.
Any inquiry of a general nature or relating to the status of this application should be directed to the TC 2100 Group receptionist: 571-272-2100.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/Tuan A Vu/
Primary Examiner, Art Unit 2193
February 04, 2026