Prosecution Insights
Last updated: April 19, 2026
Application No. 18/609,208

AUTOMATED GENERATION OF SOFTWARE TESTS

Non-Final OA — §101, §102, §103
Filed: Mar 19, 2024
Examiner: VU, TUAN A
Art Unit: 2193
Tech Center: 2100 — Computer Architecture & Software
Assignee: Functionize Inc.
OA Round: 1 (Non-Final)
Grant Probability: 73% — Favorable
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 73% (718 granted / 980 resolved; +18.3% vs TC avg) — above average
Interview Lift: +21.4% in resolved cases with interview — strong
Typical Timeline: 3y 5m avg prosecution; 31 currently pending
Career History: 1,011 total applications across all art units

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 980 resolved cases
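The headline figures above can be reproduced from the raw counts shown. A minimal sketch (the Tech Center baselines here are back-derived from the displayed deltas, not independent data, and the meaning of the per-statute percentages is taken from the dashboard labels as-is):

```python
# Reproduce the examiner statistics from the counts shown on this page.
granted, resolved = 718, 980

# Career allowance rate: granted / resolved, displayed rounded to 73%.
allow_rate = 100 * granted / resolved
print(f"Career allowance rate: {allow_rate:.1f}%")  # ~73.3%

# Implied Tech Center average, back-derived from the "+18.3% vs TC avg" delta.
tc_avg = allow_rate - 18.3
print(f"Implied TC average: {tc_avg:.1f}%")

# Statute-specific rates as (rate, delta vs TC avg) from the table above.
statutes = {"101": (10.4, -29.6), "103": (54.1, 14.1),
            "102": (10.2, -29.8), "112": (10.5, -29.5)}
for s, (rate, delta) in statutes.items():
    # Back-derive the implied TC baseline for each statute.
    print(f"§{s}: {rate}% (implied TC avg {rate - delta:.1f}%)")
```

Notably, all four statute deltas back out to the same implied Tech Center baseline of 40.0%, which suggests the "vs TC avg" comparisons share a single estimated baseline.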

Office Action

§101 §102 §103
DETAILED ACTION

This action is responsive to the Application filed 3/19/2024. Accordingly, claims 1-24 are submitted for prosecution on the merits.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 7-8, and 14-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tomkins et al., USPubN 2021/0295822 (herein Tomkins).

As per claim 1, Tomkins discloses a method for generating a software application test case for an application, the method comprising: providing a machine learning model (neural network – para 0104; neural network – para 0264); training the machine learning model (e.g. learning model using different sets of training data in a sequence … train an instance with a first set of training data … train the pre-trained instance with a second set of training data, pre-trained machine learning model – para 0105) with a base user interaction training dataset (user interaction, modify vertex of the ontology graph, modifying a relationship based on the UI interaction … between pair of n-grams, change an n-gram weight or … value … update to the value … may cause further updates to the set of neural network weights … or other learning model parameters – para 0322; train a machine learning model in a first stage based on … set of queries and responses … predetermined set of questions and answers – para 0105; see interactive elements, update a hierarchical set of graphs, edit a connection between a first vertex … and a second vertex, UI element may re-arrange blocks representing workflow operations – para 0167-0169; ontology graphs – para 0170; visual indicators based on … ontology graphs … send data … plurality of request-response exchanges – para 0303) to produce a base model (see pre-trained instance from above; pre-trained transformer (GPT) language model – para 0160; a pre-trained neural network, pre-trained transformer language model … may use a subset of the n-grams of an initially-obtained query – para 0160; initialized with a pre-trained model … reduced-scope training – para 0264; pre-trained head 1212 – para 0183); and training the base model with an application-specific training dataset (use the initial set of parameters of the pre-trained … to generate output that is usable as an input for a set of task-specific layers … during the training of model 1214 – para 0183) to provide a fine-tuned model (updating the learning model … may perform a set of fine-tuning operations … by the fine tune training function 1220, function 1220 may apply from dataset 1206 … data specific to an account or organization – para 0186) for an application.
As per claim 2, Tomkins discloses the method of claim 1, further comprising generating by the fine-tuned model a software application test case (domain "medical tests" – para 0305) comprising a sequence of user actions (n-grams and associated set of visual indicators … UI elements … may be interacted with … set of requests to a server based on an input or configuration of the UI, a second message may include an n-gram indicated by a user and an update value corresponding to the n-gram – para 0306-0307) on a graphical user interface of the application.

As per claim 3, Tomkins discloses the method of claim 1, wherein the machine learning model is a generative pretrained transformer (generative pre-trained transformer – para 0160).

As per claim 7, Tomkins discloses the method of claim 1, wherein the application-specific training dataset (refer to claim 1) is provided by recording user actions (data ingestion, processing workflow, dynamically update a UI with workflow blocks – para 0336; interactive elements, update a hierarchical set of graphs, edit a connection between a first vertex … and a second vertex, UI element may re-arrange blocks representing workflow operations – para 0167-0169) on a graphical user interface of the application.

As per claim 8, Tomkins discloses the method of claim 7, wherein the dataset of sequential user actions is provided in a standard format for data interchange (interaction with the user … send to a structured data store as JSON document – para 0316; causes a UI … to render text from a natural language text … rendered text includes a set of visual indicators … indicating words or n-grams … send UI data that includes structured data … used to display the UI … interpret the JSON file and update a UI based on the JSON file – para 0304).
As per claim 14, Tomkins discloses a system for generating a software application test case for an application, the system comprising: a machine learning model; a base user interaction training dataset; and an application-specific training dataset (all of which have been addressed in claim 1).

As per claim 15, Tomkins discloses the system of claim 14, wherein the machine learning model is a generative pretrained transformer (refer to claim 3).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4-6, 10, 13, 16-18, 20, and 22 are rejected under 35 U.S.C. § 103 as being unpatentable over Tomkins et al., USPubN 2021/0295822 (herein Tomkins) in view of Li et al., CN 114238070 (translation), 03-25-2022, 12 pgs (herein Li), and Fong, USPubN 2018/0349256 (herein Fong).

As per claims 4-5, Tomkins does not explicitly disclose the method of claim 1, wherein the base user interaction training dataset comprises a large corpus of manually scripted software test cases; wherein each manually scripted software test case of the large corpus of manually scripted software test cases is provided in a standard format for data serialization and/or data interchange.
Tomkins discloses a text-based dataset as a large corpus (corpuses … implemented with ontological objectives – para 0122; UI elements, user clicks on the first button, interaction with a third button … display … text documents from a corpus of text data – para 0354) representing the base (interactive elements, update a hierarchical set of graphs, edit a connection between a first vertex … and a second vertex, UI element may re-arrange blocks representing workflow operations – para 0167-0169; ontology graphs – para 0170) for the user interaction training set (data ingestion, processing workflow, dynamically update a UI with workflow blocks – para 0336), which can be provided in a data interchange or data serialization format (data serialization formats, XML, JSON – para 0336); where a script can be used to implement various processing modules (para 0116) or query instances directed at the ontology data/vertices (para 0047), and where operations to verify precision of a dataset (e.g. SQuAD test dataset) comprising question-answer retrieval may achieve improvement to a role-based domain of the operations, facilitating AI categorization on the basis of relationships associated with the ontology vertices of the graph (para 0050).

Use of pre-training for semantic recognition is shown in Li as generating a test case, provided as pre-obtained from a test case store (pg. 2) in the form of an extracted test script (pg. 2), to judge disambiguity associated with a predicate representation of translated keywords (pg. 8) obtained based on a PyTorch pre-training model (pg. 3), the semantic analysis by the test being part of disambiguation of the n-gram grammar/aspect of the language/speech (claims 4-5, pg. 11; disambiguation to … part-of-speech tagging process – pg. 3) and making each sentence component semantically clear (pg. 4); hence use of a test script included in pre-training to re-arrange or perform disambiguation of language (keyword translation) or grammar destined for semantic recognition under neural network deep learning/translation (pg. 8) is recognized.

Further, Fong discloses test cases generated based on received natural language strings that are destined for a trained neural network in conjunction with a reinforcement learning model (see abstract), including legacy test automation data for pre-training to identify correlation of the received input/dataset to an intent and functional aspect of the language, using the test to perform the correlating (para 0007-0008), so that the pre-trained neural network can make use of weights or values associated with interconnected nodes of the network, where datasets into the pre-training include natural language descriptions of steps or user interactions with visual elements (para 0009), the pre-training configured via test scripts to pre-classify the NL description, subclassify a description/sentiment (para 0029), and assign weights, the latter based on a performance or accuracy score generated from the test script (para 0027-0028) in support of the reinforcement learning engine that is configured to identify a set of actions and pre-defined values representative of computations that can be made available within a target application (para 0032-0033). Hence, use of test cases or scripts underlying pre-training of NL input or language representing user interactions into a reinforcement learning engine is recognized.
Therefore, as HTML structure, scripts, and schemas (XML, JSON) can be viewed as manually scripted structures or renditions, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to implement the pre-training in Tomkins so that disambiguation of description language or syntax elements, as well as pre-classification thereof to facilitate a subsequent classification or semantic recognition engine, would use test cases implemented as manually formed scripts or a standard/structured test case format – as set forth in Li and Fong – in the sense that the training set provided as the base user interaction training set represents a large corpus of manually scripted software test cases – as per the test case store in Li – with each manually scripted software test case of the corpus provided in a standard format for data serialization and/or data interchange; because a script as a standard structured format for manually implementing a text-based test case can be executed without a dedicated compiler, enabling this standard format to be interpreted within HTML, a browser, or a web-friendly environment, and provision of test cases retrievable from a large corpus prestored in available test case storage as set forth above (see Li) would accelerate test configuration and testing of data being pre-trained, as in Tomkins, in accordance with pre-classification of a text stream or NL representation of user interactions for which input elements of weights or indications of intent can be correlated (via effect of the test script) from within the semantic or grammatical context of interactive operations and accordingly be assigned/quantified with commensurate merits or weight values, based on which the quantified merits can be deployed (embedded) as a vector configuration input into the actual training (follow-up to the pre-training stage), enabling this (neural network) stage, as in Tomkins, to assess and determine the most appropriate set of actions or recommendations commensurate with the received language expressing the user-described UI activities (as in Fong), on the basis of the pre-classified items of significance and intent-driven correlation achieved from the test script portion of the pretrained model.

As per claim 6, Tomkins does not explicitly disclose the method of claim 1, wherein the base user interaction training dataset comprises a plurality of software test cases for a wide range of applications, a wide range of use cases, and/or has a reasonable distribution of test case lengths. But pre-storage of test data in the form of cases that can be retrieved and activated as scripts is shown in Li's data storage (test case store – pg. 2); hence available data storage of SW test cases and test data provisioned for a wide range of test applications and use cases is recognized. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to implement provision of pre-training tests and pre-classification of input data in Tomkins's ontology-based machine learning so that the base user interaction training dataset comprises a plurality of software test cases for a wide range of applications and a wide range of use cases, provided as large corpus storage of test cases; because pre-existing availability of test cases, and the potential thereby to provide for various test scenarios, diverse types of use cases, or a wide variety of test applications, would ease test developers with prompt identification, retrieval, and formation of a desired test case, so that immediate semantic analytics, statement recognition, and identification of keyword significance by one such test case executing according to the pre-training stage as set forth above would enhance the lexical grouping or semantic pre-classification of natural language input when a textual description representative of user data is to be trained by a subsequent machine learning stage as in Tomkins; and in-depth learning by the latter would be able to derive user intent and numerical qualification of interactions/operations of significance associated with that intent, which in turn would enable the training to generate sets of actions or configuration suggestions, design or deployment sets provided as recommendations in response to the user request for deep learning as in Tomkins.

As per claim 10, Tomkins does not explicitly disclose the method of claim 1, further comprising inputting to the fine-tuned model a sequence of user actions on a graphical user interface of an application to generate a software application test case comprising a sequence of user actions on a graphical user interface of the application; scoring the software application test case using a reward model; training a reinforcement learning model; and adjusting weights of the fine-tuned model. But input of user interactions in natural language or a standard format such as JSON is shown in Tomkins (para 0304; interaction with the user … send to a structured data store as JSON document – para 0316). Generating test cases to support pre-classification of semantics of significance (intent-driven keywords or words) via pre-training, for the effect of configuring vector input (para 0015, 0018) into an in-depth learning model, is shown in Fong; where datasets into the pre-training include natural language descriptions of steps or user interactions with visual elements (para 0009), the pre-training configured via test scripts to pre-classify the NL description, subclassify a description/sentiment (para 0029), and assign weights, the latter based on performance as part of a reward model/function (para 0023-0025) that ingests and stores accuracy scores (para 0028) generated from the automation test script (para 0027-0028) in support of the reinforcement learning engine (para 0033-0035) that evaluates merits (para 0034, 0090) stored in the reward model with respect to the vindicated significance of UI actions, the convolutional NN underlying this reinforcement learning following the paradigm of neural network inference or training by correlating and connecting features and their weights for tuning over the period of a task (UI actions) to be performed, from neuron performance and weights included with a neural network (para 0030).

Hence, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to implement the stages of training of user interface data in Tomkins's in-depth model training so that fine-tuning or in-depth training of the model includes 1) inputting a sequence of user actions on a graphical user interface of an application to generate a software application test case – as in Fong – comprising a sequence of user actions on a graphical user interface of the application – as in Fong; 2) scoring and recording thereof by the software application test case using a reward model/function, as in Fong; and 3) training a reinforcement learning model, as in Fong, and adjusting weights of the fine-tuned model based on the reinforcement and re-evaluation of rewards; because automated test cases to acquire performance metrics from data grouping by a pre-training classification of disparate types of user interactions, supplied as text-based input into a deep learning (reinforcement learning) engine, would help the pre-classified data to be assigned a score – as via a reward model – which in turn would enable re-evaluation of the quantified reward by a reinforcement learning engine that correlates the performance reward with the pre-classification of the original set of UI actions, so as to grant proper merit to the functions or user actions recognized from the pre-training and prioritize implementation of the functions/actions deemed most worthy of the reward, for the benefit of a relevant user.
As per claim 13, Tomkins does not explicitly disclose the method of claim 2, further comprising generating executable code in a programming language to perform the software application test case. But test cases implemented from natural language text provided in a manually scripted file in Fong can alternatively be automated as compiled binaries of object code (para 0013). Hence, it would have been obvious at the time of the invention for one skilled in the art to implement testing of the preprocessed input stream into the AI fine-tuning sequences of machine learning so that the base user interaction training dataset can be reorganized with a test arrangement, where a test case can be in script format as well as in executable code in a programming language format, both automated or configured to perform the software application test case; because immediate semantic analytics, statement recognition, and identification of keyword significance by test cases – notably those provided in a programming language and compiled in binary format for very fast execution as part of a pre-training stage as set forth above – would boost performance associated with actions geared toward the lexical grouping or semantic pre-classification of natural language input when a textual description representative of user data is to be trained by a subsequent machine learning stage as in Tomkins; and in-depth learning by the latter would be able to derive user intent as well as re-organization and assessment of user interactions or UI operations of significance matching that intent, which in turn would enable the training to generate sets of actions or configuration suggestions, design or deployment provided as recommendations in response to the user request for deep learning as in Tomkins.

As per claims 16-17, refer to the rejection of claims 4-5, respectively. As per claim 18, refer to the rejection of claim 6.
As per claim 20, refer to claim 8.

As per claim 22, Tomkins discloses the system of claim 14, further [sic] a reward model and a reinforcement learning model (refer to the rationale of claim 10).

Claim 9 is rejected under 35 U.S.C. § 103 as being unpatentable over Tomkins et al., USPubN 2021/0295822 (herein Tomkins), in view of Burgis et al., USPubN 2022/0116415 (herein Burgis) and Chen, Wen-Ke, CN 114117240 (translation), 07-08-2022, 31 pgs (herein ChenWK).

As per claim 9, Tomkins does not explicitly disclose the method of claim 1, wherein the application-specific training dataset comprises: (i) data describing sequences of user actions performed by a number of users interacting with the application; (ii) data describing sequences of user actions performed by a number of users interacting with the application on a number of devices; and/or (iii) data describing sequences of user actions performed by a number of users interacting with the application at a number of different times.

As for (ii) and (iii), Burgis discloses extraction of data for vector configuration (para 0046-0047) or grouping of insights in relation to machine-learning-trained classifiers using actionable insights on determined intent and interactions (para 0160), where the training datasets comprise a) user interactions captured within one or more predetermined time windows to be mapped to a likelihood prediction for each ML-based model (para 0162), and where the extraction module for accessing the interaction dataset includes b) interactions between communication devices, including a customer device and a service provider device, the classification of such interactions identifying actionable insight categories based in part on the intent of a corresponding interaction (para 0084); hence a training dataset describing sequences of actions performed by users interacting with the application at a number of different times – per a) – and on a number of devices – per b) – is recognized.
As for (i), a training dataset obtained from a social interaction log is shown in ChenWK as comprising a plurality of reference social group interactions, where the training operates over vectors to determine whether convergence based on the (interaction intention) vector representation of a first community interaction log will match convergence based on the (interaction intention) vector representation of a second community interaction log (bottom pg. 28 to pg. 29), each interaction intention vector set as a reference for a respective interaction log in a plurality of user community interaction activities (pg. 28). Hence, vector configuration for AI training comprising a training dataset indicative of sequences of user actions performed by a number of users interacting with the social application is recognized.

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to implement the content of the training dataset in Tomkins so that, for configuring application-specific training, the corresponding dataset would comprise: 1) data describing sequences of user actions performed by a number of users interacting with the application – as set forth in ChenWK; 2) data describing sequences of user actions performed by a number of users interacting with the application on a number of devices – as set forth in Burgis; and 3) data describing sequences of user actions performed by a number of users interacting with the application at a number of different times – as set forth in Burgis; because in the endeavor of structuring/preparing training data associated with classifying interactive elements or interacting entities in a business, application, or network environment, for the effect of recommending the most optimal functions, software implementations, and options by which interactions and communication of information between the entities improve in performance and become more secure and effective, proper recording of information associated with the above recommendation purpose would necessitate inclusion of activity type, the context or scale thereof, the nature and number of entities or machines involved, as well as time information and the frequency with which the activities occur; and by documenting or logging sequences of interactions according to the number of interacting users, the number of devices on which their interactivity occurs, and the number of times they occur, this well-defined set of information can be employed (pre-trained) for prompt categorization by intent and pre-classification per significance score, such that, based on the weight allotted to each respective pre-category from the pre-training, additional AI fine-tuning runs can be deployed to find the most optimal/actionable recommendation in the form of a deployable component or software that best responds to various types of interaction paradigms, including a type that befits a numeric scale of participants/users, a type that can support a defined number of machines or devices involved in the communication network, and/or a type that can accommodate interactivity demand that repeatedly recurs, extends, or cycles over a time period.

Claims 11 and 23 are rejected under 35 U.S.C. § 103 as being unpatentable over Tomkins et al., USPubN 2021/0295822 (herein Tomkins) in view of Li et al., CN 114238070 (translation), 3-25-2022, 12 pgs (herein Li) and Fong, USPubN 2018/0349256 (herein Fong), further in view of Zhuang et al., USPubN 2021/0174209 (herein Zhuang).

As per claim 11, Tomkins does not explicitly disclose the method of claim 10, wherein the reinforcement learning model is a Proximal Policy Optimization (PPO) model. But use of machine learning to fine-tune over a previous stage of pre-classification of a relatively raw training set was a known practice, including fine-tuning in the form of a reinforcement learning model such as in Fong (para 0165-0168, 0174; Fig. 2); and implementing this reinforcement learning as a Proximal Policy Optimization (PPO) model was also a known practice, as shown in Zhuang's fine-tuning of neural networks, where consecutive updates to parameters of successive neural network runs include improving a gradient based on a loss function evaluated at a later neural network run relative to the loss function considered from a previous NN run, per a reinforcement learning algorithm that utilizes a proximal policy optimization algorithm (para 0012, 0024, 0036) for redefining a trust region of the probability gradient.

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to implement fine-tuning of the convergence function by successive AI execution in Tomkins so that the AI fine-tuning can achieve reinforcement learning adopting an optimization algorithm of the Proximal Policy Optimization (PPO) type as in Zhuang; because the PPO approach would better stabilize the likelihood of swaying by the parameters being tuned under the reinforcement learning, thereby reducing the unreliable zone to be processed for convergence with regard to a loss function revisited across respective neural network runs, rendering the reinforcement learning less vulnerable to update volatility and unreliable convergence outcomes.

As per claim 23, Tomkins discloses the system of claim 22, wherein the reinforcement learning model is a Proximal Policy Optimization (PPO) model (refer to the rationale of claim 11).

Claims 12 and 24 are rejected under 35 U.S.C. § 103 as being unpatentable over Tomkins et al., USPubN 2021/0295822 (herein Tomkins) in view of Paul, USPubN 2022/0116873 (herein Paul).

As per claim 12, Tomkins does not explicitly disclose the method of claim 1, further comprising: providing a runtime agent; requesting by the runtime agent a predicted next step from the fine-tuned model for an application at runtime of the application; and executing the predicted next step on the application. Paul discloses inference engine execution on a decision state for corresponding video/media requests, using scheduling assistance by an intelligent QoS runtime agent to effect fetching of a next request deemed in tune with a condition, in support of rendering a decision update associated with a trained model configured (para 0045) for predicting parameters configurable with streaming in accordance with QoS; hence, executing a predicted step based on loading/prefetching a conformant request invoked by a runtime agent, to enable next decision rendering by an inference engine underlying QoS-conforming predictive training of streaming parameters, entails requesting by a runtime agent the input load into a next fine-tuning engine associated with a predictive model.
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to implement AI fine-tuning and pre-training of raw data (incoming requests) as part of carrying out application-specific training in Tomkins so that the pretrained information is under the control of a runtime agent to support loading of proper input into a fine-tuning phase of the AI sequenced execution in Tomkins, so that a request executing under the runtime agent feeds the proper input (a compliant request) into an inference engine of the fine-tuning stage, based on which a next prediction step can render a decision in accordance with the flow of prediction by the AI fine-tuning paradigm; because use of a runtime agent as a pluggable and self-contained program entity configured to request proper loading in support of a desired predictive rendering by a training AI model would boost the effectiveness of a pre-classifying stage as in Tomkins, wherein employing the agent software pre-positioned in conjunction with the pre-training to assist with the proper input load enables the AI training engine to render a next prediction step with an outcome more conducive to the fine-tuning aim, in the sense that the effect of such rendering would progressively shorten the time complexity of achieving the input/output convergence targeted by the fine-tuning aspect of the training process.

As per claim 24, Tomkins discloses the system of claim 14, further comprising a runtime agent (refer to the rationale of claim 12).

Claim 19 is rejected under 35 U.S.C. § 103 as being unpatentable over Tomkins et al., USPubN 2021/0295822 (herein Tomkins) in view of Yuile, USPubN 2021/0157583 (herein Yuile).

As per claim 19, Tomkins does not explicitly disclose the system of claim 14, further comprising a code snippet comprising instructions for recording user actions on a graphical user interface of the application to provide a dataset of sequential user actions.
Yuile discloses code snippets articulating logging behavior of different applications for use in an architecture overview for program code analysis, runtime analysis, and investigation analysis of code execution (para 0014-0015; Fig. 5), the snippets provided as Log Message strings (Figs. 9A, 9B) and illustrating logging of different applications inside an event-handling application (para 0061). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to implement recording of user interactions and UI activities in the pre-training of Tomkins with a code snippet – as in Yuile – comprising instructions for recording user actions on a graphical user interface of the application to provide a dataset of sequential user actions; because snippets of code can be portable, created on the fly, and easily integrated inside event-handling applications, program code analyses, or runtime analysis, where behavior or events captured by the snippets when plugged inside these applications can provide immediate insights or patterns by which the application can derive analytic data or behavioral intent of significance with which to render proposed options or problem-solving components in response to clients requesting recommendations and analytics services.

Claim 21 is rejected under 35 U.S.C. § 103 as being unpatentable over Tomkins et al., USPubN 2021/0295822 (herein Tomkins) in view of Sianez, USPubN 2021/0191925 (herein Sianez).

As per claim 21, Tomkins does not explicitly disclose the system of claim 14, further comprising a dictionary that comprises a list of integers and a vocabulary of words or subwords and defines a one-to-one correspondence between each integer of the list of integers and each word or subword of the vocabulary.
Sianez discloses a processing framework (para 0046) with a query of term identity as part of extracting features of a data batch from the corpus of a dictionary (para 0035, 0049) that includes terms, references, and numerical data relating to words, pronouns, adverbs, etc., metadata, and special characterizations, all as query features configured with identifiers. Each such query-term identifier can be associated with a unique hash value, whose numerical representation can define a feature vector provided as a vector-representation library supporting machine learning associated with a text-to-speech training model (para 0031). Hence, use of a dictionary corpus comprising words, subwords, and numerical representations configured to provide a one-to-one correspondence between a unique number (among a list of hash values) and each word of the vocabulary is recognized.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the natural language processing and feature extraction of requests in the Tomkins pre-training stage so that understanding the natural language part of the user interaction in this stage would include corpus information, as in Sianez, from a dictionary comprising a list of integers (such as hash identifiers) and a vocabulary of words or subwords, and defining a one-to-one correspondence between each integer of the list (hash value) and each word or subword of the vocabulary as set forth in Sianez. Use of a dictionary as a corpus of words and subwords, serving as a reference standard when validating and categorizing natural language terms or concepts encountered in the incoming requests of the Tomkins AI system, would consolidate the lexicographic weight of a given word and possibly allow intent to be identified from it; and use of a numerical reference provided as a number-to-word (one-to-one) correspondence as set forth above would enable this numerical representation to populate a vector configuration (that is, the significantly filtered features or lexicographic entities represented as numerical values extracted from a dictionary in the NL preprocessing stage), so that the resulting vectors can be input into a deeper learning stage. There, intent-driven formulations or application-specific text or UI patterns can be further subjected to a fine-tuning AI model, according to whose evaluation the most optimal implementation recommendations can be determined and returned to the users whose interaction sequences are provided as incoming requests to the initial processing stage.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to a non-statutory category of subject matter and to a judicial exception of the abstract-idea type. Claim 14 recites a system for generating a software application test case for an application, comprising: a machine learning model; a base user interaction training dataset; and an application-specific training dataset. As included in the system, a model, a base training dataset, and an application-specific dataset cannot be seen as a cooperation of software with tangible, concrete hardware, a machine, or an article of manufacture to support the operation and realization of the software. The "model" and "dataset" amount to software per se, and accordingly cannot fall within any of the four statutory categories of subject matter: a process, an article of manufacture, an apparatus or machine, or a composition of matter.
Under MPEP 2106.03, software per se can be viewed as signals or data instructions in the abstract and may encompass signal-like transitory waves, which cannot be stored in a concrete/physical medium. For instance, a model and a dataset, in the absence of a computer-readable non-transitory medium, can be viewed as non-functional descriptive material, or merely "information", which is not among the four statutory categories set forth above. The claim, recited in terms of software per se, fails to be categorized under MPEP 2106.03 and can also be directed to a judicial exception of the abstract-idea type under MPEP 2106.04, from the Step 1 analysis.

Analysis of dependent claims: Claims 15-18 describe additional details of the model, the dataset, or the test case; claim 19 describes code snippet instructions; claims 20-21 describe a dataset format and a dictionary; claims 22-23 describe another type of model; and claim 24 describes a runtime agent. In all, the dependent claims fail to render the software-per-se deficiency of claim 14 significantly more than a non-statutory category of subject matter.

Claim Objections

Claims 21-22 are objected to because of the following informalities:
- claim 21 recites "vocubulary of words", a clear typographical error;
- claim 22 recites the clause ", further a reward model and a reinforcement learning model.", which is devoid of a verb.
Appropriate correction is required.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tuan A Vu, whose telephone number is (571) 272-3735. The examiner can normally be reached 8 AM-4:30 PM, Monday-Friday. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chat Do, can be reached at (571) 272-3721.
The fax number for the organization where this application or proceeding is assigned is (571) 273-3735 (for non-official correspondence; please consult the Examiner before using) or (571) 273-8300 (for official correspondence), or calls may be redirected to customer service at (571) 272-3609. Any inquiry of a general nature or relating to the status of this application should be directed to the TC 2100 Group receptionist at (571) 272-2100.

/Tuan A Vu/
Primary Examiner, Art Unit 2193
February 21, 2026
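Stepping outside the Office Action text for a moment: the dictionary recited in claim 21, a one-to-one correspondence between a list of integers and a vocabulary of words or subwords, is in essence a tokenizer vocabulary. A minimal sketch (editor's illustration with an invented toy vocabulary, not from the application or Sianez):

```python
# Hypothetical tokenizer vocabulary: a one-to-one map between
# integers and words/subwords, as recited in claim 21.
vocab = ["test", "case", "gener", "##ate", "click", "submit"]

# word/subword -> integer
token_to_id = {tok: i for i, tok in enumerate(vocab)}
# integer -> word/subword (the inverse, making the map one-to-one)
id_to_token = {i: tok for tok, i in token_to_id.items()}

def encode(tokens):
    """Map a token sequence to its integer representation."""
    return [token_to_id[t] for t in tokens]

def decode(ids):
    """Recover the token sequence from the integers."""
    return [id_to_token[i] for i in ids]

ids = encode(["gener", "##ate", "test", "case"])
print(ids)          # [2, 3, 0, 1]
print(decode(ids))  # ['gener', '##ate', 'test', 'case']
```

Because the mapping is one-to-one, encode and decode are exact inverses, which is what lets integer sequences stand in for text when feeding a machine learning model.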

Prosecution Timeline

Mar 19, 2024
Application Filed
Feb 21, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596557
SYSTEM AND METHOD FOR GENERATING RECOMMENDATIONS FOR DATA TAGS
2y 5m to grant Granted Apr 07, 2026
Patent 12591718
Application Development Platform, Micro-program Generation Method, and Device and Storage Medium
2y 5m to grant Granted Mar 31, 2026
Patent 12585573
ASSEMBLING LOW-CODE APPLICATIONS WITH OBSERVABILITY POLICY INJECTIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12582796
METHODS, DEVICES, AND SYSTEMS FOR IMPROVED OXYGENATION PATIENT MONITORING, MIXING, AND DELIVERY
2y 5m to grant Granted Mar 24, 2026
Patent 12541384
COMPONENT TESTING FRAMEWORK
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
73%
Grant Probability
95%
With Interview (+21.4%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 980 resolved cases by this examiner. Grant probability derived from career allow rate.
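The headline projections can be reproduced from the examiner panel's career data, assuming the page rounds the raw allow rate (718 granted of 980 resolved) and treats the +21.4-point interview lift as simply additive; a quick check of that assumed model:

```python
granted, resolved = 718, 980   # career data from the examiner panel
interview_lift = 21.4          # percentage-point lift shown on the page

allow_rate = 100 * granted / resolved
print(round(allow_rate))       # 73  -> "73% Grant Probability"

# Assumed model: interview lift is additive in percentage points
with_interview = allow_rate + interview_lift
print(round(with_interview))   # 95  -> "95% With Interview"
```

This matches both displayed figures, though the page does not state how the interview-adjusted number is actually computed.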
