Prosecution Insights
Last updated: April 19, 2026
Application No. 18/389,126

SYSTEM AND METHOD FOR AI-BASED LOAN PROCESSING

Status: Final Rejection (§101)
Filed: Nov 13, 2023
Examiner: GREGG, MARY M
Art Unit: 3695
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: iBusiness Funding LLC
OA Round: 6 (Final)

Grant Probability: 14% (At Risk)
Expected OA Rounds: 7-8
Time to Grant: 5y 3m
Grant Probability with Interview: 28%

Examiner Intelligence

Career Allow Rate: 14% (grants only 14% of cases; 89 granted / 629 resolved; -37.9% vs TC avg)
Interview Lift: +14.3% among resolved cases with interview (moderate lift)
Avg Prosecution: 5y 3m (typical timeline)
Total Applications: 692 across all art units (63 currently pending)

Statute-Specific Performance

§101: 31.3% (-8.7% vs TC avg)
§103: 37.2% (-2.8% vs TC avg)
§102: 12.2% (-27.8% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)

Tech Center averages are estimates; based on career data from 629 resolved cases.

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following is a Final Office Action in response to communications received January 12, 2026. Claims 10-20 and 31 have been canceled. No claims have been amended. No new claims have been added. Therefore, claims 1-9, 21-30 and 32-34 are pending and addressed below.

Priority: Application No. 18/389,126, filed 11/13/2023. Applicant/Assignee: iBusiness Funding LLC. Inventor: Levy, Justin.

Response to Amendment/Arguments
Claim Rejections - 35 USC § 101

Applicant's arguments filed 06/30/2025 have been fully considered but they are not persuasive. In the remarks, applicant points to the 2019 USPTO guidance and the 2024 guidance update with respect to artificial intelligence, arguing that the previous Office Action misapprehends the nature of the claimed subject matter and fails to apply the two-step framework established by Alice and recent PTAB decisions. Applicant argues that the claimed subject matter is patent eligible under step 1 because the limitations are directed toward specific technological improvements in machine learning (ML) training and deployment using distributed ledger technology. Applicant's argument is not persuasive. The examiner notes that applicant does not identify what ML technology is improved. The limitations as claimed recite collecting data, analyzing data, and then manipulating data into feature vectors for input into the ML model. The limitations are silent with respect to details of how the model is "trained"; instead they merely recite the high-level function "generate…a predictive model….a newly created model" applied to control how assets are transferred.
The limitations recite the "train" process as "collecting via a smart contract, data…", "storing the collected data on a distributed ledger", "testing and refining, iteratively the predictive model…" and "storing, on the distributed ledger the trained predictive model…". The "train" further includes "train the predictive model based on information…", where data is retrieved and analyzed by the model to determine a recommendation for lending. Accordingly, the "train" limitations lack technical description and depend only upon the data collected and inputted; the process is not directed toward an improvement in the training of the model, but rather toward the data collected and analyzed and the results outputted by the model for recommending lending activity. The specification is equally silent with respect to how the ML model is trained and makes clear that the focus of the invention is not to improve ML technology. Rather, the specification recites applying ML technology to the analysis of borrower-related data for lending decisions:

[0031] In one embodiment of the present disclosure, the system provides for AI and machine learning (ML)-generated loan approval parameters based on analysis of a borrower's related data. In one embodiment, an automated decision/approval model may be generated to provide lending recommendation parameters associated with the borrower. The automated decision/approval model may use historical borrowers' data collected at the current lending facility location (i.e., a bank or lending institution entity) and at lending facilities of the same type located within a certain range from the current location or even located globally. The relevant borrowers' data may include data related to other borrowers having the same parameters such as age, financial conditions, language or locations, etc.
The relevant borrowers' data may indicate successfully approved loans and an indication of the loan processor (i.e., a loan officer, a lending specialist, or an underwriter) who processed the loan applications for the borrowers of the same parameters, and the lending institution where the loan processing and underwriting was performed. This way, the best matching loan processing practitioner may be directed to respond to a given borrower's application based on current borrower-related data and historical data of servicing borrowers having the same characteristics such as age, language, financial condition, location, etc.

[0032] In one disclosed embodiment, the AI/ML technology may be combined with a blockchain technology for secure use of the borrower-related data and borrower-related interview or questionnaire data. In one embodiment, the lender or loan processing entities may be connected to the lending server (LS) node over a blockchain network to achieve a consensus prior to executing a transaction to release the loan approval/disapproval verdict and/or lending recommendation for the borrower based on the lending parameters produced by the AI/ML module. The system may utilize the borrower's and/or borrower-related data assets based on the borrower entity and the lender entities being on-boarded to the system via a blockchain network.

[0033] The disclosed process according to one embodiment may, advantageously, eliminate the need for the lending practitioners to analyze the borrower-related data using additional processing of the borrower's documents and/or transcripts produced by the NLP processing. Instead, the loan approval/disapproval verdict and lending recommendations may be produced directly on a granular level based on the borrower and borrower-associated digital data according to the AI-based predictive analysis and lending recommendations.
[0034] This process includes a transparent lending recommendations/verdict mechanism that may be coupled with a secure communications chat channel (implemented over a blockchain network) which supports both parties to set and agree on the loan processing and terms with each other. In one embodiment, the chat channel may be implemented using a chat bot.

[0035] As discussed above, the disclosed embodiments provide a process for loan application processing using AI and machine learning techniques. The disclosed method involves the following steps:

[0036] A borrower applies online through a digital intake form provided by a borrower entity implemented on a PC, notebook, tablet or mobile device. The borrower's data is generated from the supplied data fields. Then, additional borrower-related documents are added to the borrower's data, including but not limited to a driver's license, tax returns, business profit and loss statements, and balance sheets over the last two years.

[0037] In one embodiment, the system may OCR all of the documents, categorize and correctly label them, and identify what they are. The system may then use a machine learning (ML) module to check the documents against other documents that have been received from other approved borrowers with similar parameters such as age, location, language, financial conditions, etc. The ML may be trained over many different data points to detect similarities and also differences between the applying borrower and approved borrowers. The ML module hosted on a lending server may then categorize the similarities and differences and may provide feedback to the borrower in an automated fashion. The feedback may indicate some missing data or documents or may indicate a probability of getting the loan application approved.

[0039] The lending server may receive additional borrower data (i.e., financial details) and may auto-input the financial details into an underwriting calculator and create a credit memo.
In one embodiment, the interactions between underwriters and sales professionals may be compiled into a large training set of data. Then, the lending server may create the questions from underwriting and submit them to sales or to the borrower directly, depending on the lead source (if there is a salesperson, to the salesperson; if a direct lead, then directly to the borrower). The borrower or the sales rep will then have an opportunity to automatically and digitally supply the answers to those questions, which will then inform the system and complete the credit memo.

[0059] In one embodiment, the LS node 102 may receive the predicted lending parameters from a permissioned blockchain 110 ledger 109 based on a consensus from the lender entity nodes 113 confirming, for example, the loan approval/disapproval verdict, payment plan, schedule and other loan conditions. Additionally, confidential historical borrower-related information and previous borrowers'-related lending parameters may also be acquired from the permissioned blockchain 110. The newly acquired borrower-related data with corresponding predicted loan verdict and lending recommendation parameters data may also be recorded on the ledger 109 of the blockchain 110 so it can be used as training data for the predictive model(s) 108. In this implementation the LS node 102, the cloud server 105, the lender entity nodes 113 and borrower entity(ies) 101 may serve as blockchain 110 peer nodes. In one embodiment, local borrowers' data 103 and remote borrowers' data 106 may be duplicated on the blockchain ledger 109 for higher security of storage.

[0060] The AI/ML module 107 may generate predictive model(s) 108 to predict the lending verdict and/or lending recommendation parameters for the borrower 111 in response to the specific relevant pre-stored borrowers'-related data acquired from the blockchain 110 ledger 109.
This way, the current lending verdict and/or lending parameters may be predicted based not only on the current borrower-related data and current borrower call data, but also on the previously collected heuristics and borrowers'-related data associated with the given borrower 111 data or current lending parameters generated based on the borrower data and call data. This way, the optimal way of handling the borrower's loan application, such as the best loan specialist(s), is selected for processing the loan application of the borrower 111 for the most likely successful closing. After the loan is closed, the related documents may be converted into unique secure NFT assets to be recorded on the blockchain to be used for lending model training.

Accordingly, under step 2A prong 1, the claimed subject matter, when considered as a whole, is not directed toward an improvement to ML technology, but rather toward analysis of borrower data in order to output recommendations for lending activity for the transfer of assets. Such subject matter is encompassed in the abstract category of fundamental economic practices and commercial/legal interactions. The rejection is maintained.

In the remarks applicant repeats that the claimed subject matter is directed toward a specific technological solution to technical problems in the field of ML model training, digital asset control systems and distributed ledger implementation by performing a series of interrelated functions. Explicitly, applicant points to the "acquire, over a network, user data…" limitation, which applicant asserts establishes data acquisition as a foundational element. Applicant's argument is not persuasive. The acquisition of data is recited at a high level, lacking any technical details as to the technical process for performing the "acquire" operation, and therefore is insignificant extra-solution activity.
Applicant further argues that the claim limitations recite "convert …data from first format to second format enabling analysis of the …documents", "analyze, the user data …by performing computational analysis on the user data", "determine, based on …analysis, a plurality of features", and "search over the network local user database based on query…causing retrieval of …user related data, the retrieved data…comprising …documents…". The limitations include "generate …feature vector…", which is the creation of a mathematical representation of data for machine learning analysis. The "convert…format" and "generate…vector" limitations are not an attempt to improve upon the technology for data formatting or to improve upon processing data for learning algorithms, but are instead mere data manipulation for use in applying an algorithm to analyze data. Converting from one format to a second format and generating feature vectors are mere data manipulation. For data, mere "manipulation of basic mathematical constructs[,] the paradigmatic 'abstract idea,'" has not been deemed a transformation. CyberSource v. Retail Decisions, 654 F.3d 1366, 1372 n.2, 99 USPQ2d 1690, 1695 n.2 (Fed. Cir. 2011) (quoting In re Warmerdam, 33 F.3d 1354, 1355, 1360 (Fed. Cir. 1994)). Whether the transformation is extra-solution activity or a field-of-use limitation turns on the extent to which (or how) the transformation imposes meaningful limits on the execution of the claimed method steps. A transformation that contributes only nominally or insignificantly to the execution of the claimed method (e.g., in a data gathering step or in a field-of-use limitation) would not provide significantly more (or integrate a judicial exception into a practical application). Mayo, 566 U.S. at 76, 101 USPQ2d at 1967. The Supreme Court disagreed, finding that this step was only a field-of-use limitation and did not provide significantly more than the judicial exception. Id. See MPEP § 2106.05(g) & (h).
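For readers less familiar with the technology at issue, the "generate …feature vector…" step the examiner characterizes as mere data manipulation can be illustrated with a minimal, purely hypothetical sketch. The field names and fixed ordering here are invented for illustration; they do not come from the claims or the specification.

```python
# Illustrative only: a generic mapping from raw applicant fields to a
# fixed-order numeric feature vector, the kind of step the Office Action
# describes as data manipulation rather than an improvement to ML itself.
# All field names (age, income, loan_amount, prior_default) are hypothetical.

def to_feature_vector(applicant: dict) -> list[float]:
    """Convert raw applicant fields into a fixed-order numeric vector."""
    return [
        float(applicant["age"]),
        float(applicant["income"]),
        float(applicant["loan_amount"]),
        1.0 if applicant.get("prior_default") else 0.0,  # boolean encoded as 0/1
    ]

vec = to_feature_vector({"age": 41, "income": 85000, "loan_amount": 25000})
# A downstream model would consume `vec` as its input.
```

As the examiner reads the claims, they recite nothing more specific than this kind of fixed mapping from raw fields to numbers before the vector is handed to a model.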
Under step 2A prong 1, the claim limitations as a whole are not directed toward technical processes for data conversion using mathematical processes, but are instead part of the collection of data that is manipulated for use in analyzing borrower data using learning technology. Applicant points to the additional limitations "execute …(ML) model …providing vector as input to the ML model" and "generate …a predictive model, …for controlling how digital assets are transferred…the model being stored in a distributed ledger…", arguing that the creation of a model that is stored in a distributed ledger enables operational usage and continued training, a specific technological implementation. Applicant argues, pointing to the Federal Circuit decisions in Enfish and McRO, that the claims are directed toward improvements in computer technology and therefore are not, as a whole, directed toward abstract subject matter. Applicant's argument is not persuasive. Applicant has not explained how converting data into vectors for input into machine learning models, generating and training models without any technical details, and executing models to analyze financial data for asset transfers, where the models are stored at a high level in distributed ledgers, improves any underlying technology. As discussed above in argument 1, the examiner maintains that, as a whole, the claimed subject matter is not directed toward an improvement to a particular technology but rather to collecting and analyzing borrower-related data in order to output recommendations for lender activity. The rejection is maintained.

In the remarks applicant argues that under step 2A prong 2, the claimed subject matter is directed toward indications of patent eligibility by integrating any alleged abstract idea into a practical application. Applicant argues the claimed subject matter integrates any alleged abstract idea into a practical application because the claimed limitations improve the functioning of computer systems, ML technology and distributed computing.
Applicant points to MPEP 2106.04(d) and the Ex parte Desjardins decision warning against excluding AI innovations from patent eligibility. Applicant recites the limitations, arguing that they establish that the training process is improved through a distributed validation mechanism where multiple peer nodes must reach a consensus regarding data validity, a specific technical solution to data quality and integrity in machine learning datasets. Applicant is arguing limitations not claimed. The training of the ML model as claimed does not include any validation mechanism related to nodes or data. Claim 1 recites a smart contract corresponding to chaincode that associates a set of peer nodes that validate the data from the set of databases. The distributed ledger is merely applied to store the generated model and is not tied to any training process. The training process as claimed merely inputs data which is used by the model for analysis. Applicant has not explained how inputting data for analysis improves the training of ML technology. Consensus (see the limitations of dependent claims 9 and 29) between nodes is not an innovative concept and is well established for nodes in distributed systems, by which multiple nodes agree on a single source of truth, ensuring all nodes of the system reach an agreement on the same data, order of events or state. The specification is silent with respect to any technical process directed toward consensus between nodes. The rejection is maintained.

In the remarks applicant argues that the limitation "storing the …data on a distributed ledger, the storing comprising cryptographically signing and time-stamping data, such storage reducing collection time when performing training of the model" identifies a technological improvement achieved by the predictive model: "such storage reducing collection time when performing the training of the model."
Applicant argues the claim limitations go beyond mere generic storage on a ledger by "cryptographically signing and time stamping the data". Applicant's argument is not persuasive. Applicant has not explained how "storing collected data on a distributed ledger" and "storing on the distributed ledger the trained predictive model" reduce collection time when training the model. With respect to the language "storage comprising cryptographically signing and time stamping the data", applicant has not explained how data being cryptographically signed or time-stamped changes or modifies the collection time. Data that has been stored and encrypted as claimed is not tied to any process that might be interpreted as being able to reduce data collection time when the data is retrieved. The specification also does not support applicant's arguments. The specification only nominally mentions encryption:

[0082] In the example depicted in FIG. 4, a host platform 420 (such as the LS node 102) builds and deploys a machine learning model for predictive monitoring of assets 430. Here, the host platform 420 may be a cloud platform, an industrial server, a web server, a personal computer, a user device, and the like. Assets 430 can represent lending parameters. The blockchain 110 can be used to significantly improve both a training process 402 of the machine learning model and the lending parameters' predictive process 405 based on a trained machine learning model. For example, in 402, rather than requiring a data scientist/engineer or other user to collect the data, historical data (heuristics, i.e., borrowers'-related data) may be stored by the assets 430 themselves (or through an intermediary, not shown) on the blockchain 110.

[0083] This can significantly reduce the collection time needed by the host platform 420 when performing predictive model training.
For example, using smart contracts, data can be directly and reliably transferred straight from its place of origin (e.g., from the LS node 102 or from borrowers' databases 103 and 106 depicted in FIGS. 1A-1B) to the blockchain 110. By using the blockchain 110 to ensure the security and ownership of the collected data, smart contracts may directly send the data from the assets to the entities that use the data for building a machine learning model. This allows for sharing of data among the assets 430. The collected data may be stored in the blockchain 110 based on a consensus mechanism. The consensus mechanism pulls in (permissioned nodes) to ensure that the data being recorded is verified and accurate. The data recorded is time-stamped, cryptographically signed, and immutable. It is therefore auditable, transparent, and secure. The rejection is maintained.

In the remarks applicant points to the limitation "testing and refining, iteratively, the predictive model based on the stored data and subsequently collected and stored data", arguing that the iterative refinement process operates on cryptographically signed and time-stamped data stored in the distributed ledger, enabling continuous model improvement. Applicant's argument is not persuasive. According to MPEP 2106.05(d)(II), performing repetitive calculations is well-understood and does not impose meaningful limits upon the scope of the claim. With respect to the testing and refining of the model, the limitations lack technical disclosure, merely reciting high-level functions with an expected outcome. The specification is equally silent with respect to technical disclosure and only nominally mentions the testing operations:

[0084] Furthermore, training of the machine learning model on the collected data may take rounds of refinement and testing by the host platform 420. Each round may be based on additional data or data that was not previously considered to help expand the knowledge of the machine learning model.
In 402, the different training and testing steps (and the data associated therewith) may be stored on the blockchain 110 by the host platform 420. Each refinement of the machine learning model (e.g., changes in variables, weights, etc.) may be stored on the blockchain 110. This, advantageously, provides verifiable proof of how the model was trained and what data was used to train the model. Furthermore, when the host platform 420 has achieved a finally trained model, the resulting model itself may be stored on the blockchain 110.

The focus of the specification is not the training of the learning model but instead the use of the model for analyzing data. The "training" disclosed in the specification is at so high a level that any testing or refinement operations known to one of ordinary skill in the art can be applied. The specification does not disclose that any focus of the invention is to improve upon how models are refined or tested. The rejection is maintained.

In the remarks applicant argues the claims' recitation "the predictive model …newly created ML model for controlling how ….assets are transferred…" establishes key points in the creation of a model for a specifically designed purpose, and that the generation of the model includes storing the model in a distributed ledger, subsequent to usage and training. Applicant argues this enables deployment (usage) and improvement (training), providing a specific technical solution to model lifecycle management in ML systems where the distributed ledger architecture enables continuous model refinement based on deployment while maintaining immutable audit trails of model versions. Applicant's argument is not persuasive. The creation of the models is recited at a high level, lacking technical details; once created, the models are stored in ledgers. The claim limitations do not tie the storing of the data or models in the distributed ledgers to the training, except for the data collected for input that is applied by the model for analysis.
The training as claimed is only based on data collected and inputted, without any particular technical process. The usage of the model for analyzing data to output recommendations does not improve any technology; instead it merely applies technology to a commercial activity. With respect to the argument that the limitations provide continuous model refinement, applicant is arguing limitations not claimed. However, even if the refinement were continuous, it is merely data dependent and not a particular technical process to improve upon existing technology or to solve a problem rooted in technology. The rejection is maintained.

In the remarks applicant argues the claim limitations recite a specific structure: a distributed ledger with cryptographically signed and time-stamped data, smart contracts corresponding to chaincode associated with peer nodes, and stored predictive models. Applicant repeats the argument that the storage methodology reduces collection time during training, analogizing to the Visual Memory decision and its direction with respect to specific structures. Applicant's argument is not persuasive. The distributed ledger is merely a field of use applied to store data and generated models, without any details of technical implementation, for use in applying ML models to analyze borrower data to output lender recommendations. The smart contract is merely applied for validating data and not to improve upon smart contract computer programming or any other underlying technology. The rejection is maintained.

In the remarks applicant argues that the "record user lending verdict in the distributed ledger" and "train the …model based on information in the distributed ledger" limitations as claimed establish a feedback loop where operational outcomes are recorded and then used to continuously improve the model through additional training, representing reinforcement learning and model refinement, where the ledger ensures data integrity and temporal sequencing.
Applicant's argument is not persuasive. The ledger is only applied to store data, which can be retrieved for analysis by the model, and to store generated models. As discussed above, the iterative refinement is not directed toward improving upon existing technology but is instead based on data inputted for analysis to output results related to lender verdicts. The training process of the model is not modified because the data applied for the analysis is further inputted; rather, the results for lending decisions are impacted, not the technology itself. The rejection is maintained.

In the remarks applicant argues that the claims recite the limitations "responsive to a request for transfer of …assets…, retrieve user data for other user and other local …user data" and "execute the …model to analyze the retrieved …data", which establish that the model has been improved through the distributed ledger methodology. Applicant's argument is not persuasive. The combination of the limitations merely retrieves and analyzes data and does not impact, improve or modify the capacity of the underlying technology. The rejection is maintained.

In the remarks applicant argues that the claim recites the limitation "determine, based on execution of the …model, a recommendation …", which applicant asserts specifies that the improved model produces a specific recommendation. Applicant's argument is not persuasive. The combination of limitations, where data is collected and inputted into a model for analysis to determine a recommendation, is merely directed toward applying technology at a high level to analyze borrower-related data for recommending lender activity. The rejection is maintained.

In the remarks applicant argues that the claim recites additional improvements through integration of NLP capabilities. The claim recites "establish, over the network, a secure chat channel, the chat channel comprising a large language model (LLM), chat bot".
The limitations recite a specific type of AI model capable of NLP understanding and generation. Applicant further points to the recitation "communicates, via the secure communication chat channel, with the user via the chat bot, the communication comprising an exchange of information related to the recommendation". Applicant argues the limitation establishes bi-directional communication through the LLM chatbot regarding the recommendation produced, enabling natural language interaction about the decisions. Applicant argues that, significantly, the recitation "modify, based on communication via the chat bot, the recommendation" establishes that the ML model is not static, where the LLM chatbot serves as an interface enabling users to understand decisions through natural language dialogue and provide feedback that modifies those decisions. This, applicant argues, establishes an improvement by requiring modification that occurs based on communication with the user via the chat bot. Applicant's argument is not persuasive. The NLP algorithm and LLM chatbot are recited at a high level for use in transmitting data. The modification of recommendations is based on data inputted by users for further analysis by the model based on the new data. The underlying technology of the model, chatbot, NLP and LLM chatbot is not improved, nor is its operational capacity changed, because users communicate information related to the outputted recommendation. The rejection is maintained.

In the remarks applicant recites the limitation "control, over the network, the transfer of …assets based on modified recommendations", arguing that the limitation ties the system to a concrete real-world application that resulted from the LLM chatbot interaction.
The applicant argues the limitations recite a complete technological system where the ML predictions, trained using distributed-ledger-validated data with reduced collection time, are refined through natural language interaction with users via the LLM chatbot and then used to control digital assets over a network. Applicant argues that such processes, similar to Finjan, are directed toward a non-abstract improvement in computer functionality that enables machine learning systems to do more than they could do before: "train models with reduced collection time through distributed ledger storage with cryptographic signing and time-stamping, validating training data through peer node consensus mechanisms executing chaincode, store models and decisions for continuous improvement, refined algorithmic recommendations through natural language interactions with users via LLM chatbot and control asset transfers based on recommendations." Applicant's argument is not persuasive. The focus of Finjan was to provide a solution to a problem rooted in the technology of virus detection. Finjan provided software-based innovations with behavior-based virus scans, which provided an improvement over conventional identification of unknown viruses that avoid detection by conventional code-matching scans. This process employed a new kind of file, enabling computer security systems to perform the virus detection process in a manner that previously could not be performed. The claimed model merely analyzes data that has been preprocessed and/or stored prior to input for analysis to output a result. The NLP is applied in its ordinary capacity, used for enabling machines to understand, interpret and generate human language by processing chat data (text or speech). The LLM is nominally mentioned for use in the chatbot communication between the users and the machine, without any indication of an attempt to improve any of the communication, LLM or NLP technology.
Rather than Finjan, the claimed subject matter is analogous to Trading Technologies International, Inc. v. IBG LLC, Interactive Brokers LLC, where the court determined the claims did not recite a technological feature that is novel and unobvious over the prior art because the patent indicates that the claimed technological features are known technologies. The current application similarly does not recite a technical solution to a technical problem, because the problem disclosed in the specification is the need to predict and provide recommendations related to lending decisions based on the analysis of borrower-related data, which is a business problem, not a technical one. With respect to the "training" and "usage" of the model as claimed, the claimed subject matter, similar to Recentive Analytics, Inc. v. Fox Corp., simply uses machine learning to perform a task. The court found that machine learning is now viewed as a common tool rather than a technological breakthrough. The current specification fails to clearly describe a technical problem or explain how the claimed invention improves the "functioning" of the technology, rather than the outcome of applying the technology for analysis. Furthermore, similar to Recentive, the "training" steps of i) receiving vector input data, ii) generating a predictive model, iii) collecting data from databases, iv) storing the collected data, v) testing and refining, iteratively, the model based on stored and collected data, and vi) training based on testing and refining the model, where the specification teaches the learning model is trained using training data, make clear that any suitable machine learning technique can be applied that allows data to be inputted and changed based on changes in data.
According to Recentive, “The requirements that the machine learning model be ‘iteratively trained’ or dynamically adjusted in the Machine Learning Training patents do not represent a technological improvement… Iterative training using selected training material and dynamic adjustments based on real-time changes are incident to the very nature of machine learning. See, e.g., Opposition Br. 9 (‘[U]sing a machine learning technique[] . . . necessarily includes [an] iterative[] training step . . . .’ (internal quotation marks and citation omitted)); Transcript at 26:21–24 (‘[T]he way machine learning works is the inputs are defined, the model is trained, and then the algorithm is actually updated and improved over time based on the input’).” Accordingly, the examiner maintains that, as in Recentive, the claimed ML limitations do not transform the claimed subject matter into patent-eligible subject matter. The rejection is maintained. In the remarks applicant points to the Ex parte Desjardins decision, arguing that the claims recite a technological improvement in the claim limitations and the specification itself. Specifically, applicant points to the claim language stating that storage of data on the distributed ledger, comprising cryptographic signing and time-stamping, results in reduced collection time when performing training of the model. Applicant’s argument is not persuasive. Applicant has not explained how storing data that is encrypted and time-stamped reduces collection time. As discussed above, the collection of data is implemented without tying the retrieval/collection of data to the data encryption or time-stamping. Encrypting or time-stamping data prior to use is a common tool in machine learning data preparation for input into ML models (see evidence provided: Efficient CNN Building Blocks for Encrypted Data by Jain et al.; Building machine learning models with encrypted data by Crockett). The rejection is maintained.
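[Editor's illustration.] The point that signing and time-stamping are write-time data-preparation steps, decoupled from how quickly records are later collected, can be shown with a toy sketch. Everything here is hypothetical (the key, field names, and in-memory "ledger" are invented for illustration):

```python
# Toy ledger: each record is cryptographically signed and time-stamped on
# append. Retrieval is an ordinary lookup, unchanged by either step.
import hashlib
import hmac
import json
import time

SECRET = b"demo-key"  # hypothetical signing key, illustration only
ledger = []

def append_signed(record):
    """Sign and time-stamp a record, then append it to the ledger."""
    payload = json.dumps(record, sort_keys=True).encode()
    entry = {
        "record": record,
        "timestamp": time.time(),
        "signature": hmac.new(SECRET, payload, hashlib.sha256).hexdigest(),
    }
    ledger.append(entry)
    return entry

def retrieve(index):
    """Collection/retrieval is a plain index lookup either way."""
    return ledger[index]["record"]

append_signed({"verdict": "approved", "amount": 10000})
```

The signature and timestamp add integrity metadata at write time; nothing in this mechanism, by itself, speeds up the subsequent collection of the stored data, which is the gap the Action identifies in applicant's argument.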
In the remarks applicant argues the claimed limitations integrate any alleged abstract idea into a practical application with a particular machine. The claim limitations recite a “processor” and “a memory tangibly storing instructions” as foundational elements. The claim recites specific technical infrastructure: a network over which data is acquired and assets are controlled; a “local database” that is searched to retrieve data; a “distributed ledger” on which collected data, “predictive models”, “trained models” and lending verdicts are stored; “a set of peer nodes” that validate data through associated chaincode; “a smart contract” that controls data collection; “a machine learning model” that is executed to generate predictive models; “a large language model (LLM) chat bot” that enables natural language communication; and “a secure communication chat channel” established over the network. Applicant argues these are not generic computer components performing generic functions but rather specific machines and technological infrastructure configured to perform the specific technological processes claimed. The claim limitations recite operations occurring “over the network” in the limitations “acquire, over the network, user data…”, “search, over the network, a local database”, “establish, over the network, a secure communication chat channel” and “control, over the network, the transfer of the digital assets”. These limitations, applicant argues, tie the claimed subject matter to network-based computing infrastructure, not abstract concepts divorced from technological implementation. Applicant’s argument is not persuasive. According to MPEP 2106.05(b), a particular machine is more than just a generic computer; it must include details on the specific device and its components. In Mackay Radio & Tel. Co. v. Radio Corp. of America, the claim recited a particular type of antenna and included details as to the shape of the antenna and its conductors.
The USPTO guidelines further suggest that software inventions may be tied to a particular machine, because software can be integrated into specific machines or technological environments that can be seen as a “specific machine”. Factors to consider include whether the machine implements the steps of the process, whether the machine is integral to the claim, whether the involvement of the machine imposes meaningful limits on the claim’s scope, and whether the machine is more than a generic computer performing generic functions. The claimed “network”, in light of the specification and claim limitations, does not qualify as a particular machine, as the network is not integral to the claim and does not impose meaningful limits upon the claim’s scope. Rather, the “network” merely provides the conventional function of transmitting data. This is equally true with respect to the “local database” and “distributed ledger”: these storage elements merely store data, which can be searched and/or retrieved at a high level, and do not perform any steps integral to the claim that impose meaningful limits on its scope. These storage elements do not perform steps of the process, but instead do no more than generic computer storage functions. With respect to the claimed “predictive models”, “trained models” and “machine learning models”, the specification and claims explicitly recite the models as used for data analysis, which amounts to no more than mere instructions to perform the abstract idea. The claimed “machine learning model” is recited in the claims as performing the generic operations of “providing the …vector as input…”, “generate…a predictive model…”, and “train the predictive model” by “collecting data” from a set of databases via a “smart contract”.
The claim and specification do not recite or disclose machine learning model operations that implement more than high-level functions with an expected outcome, and therefore fail to impose meaningful limits on the scope of the claim. The claim limitations and specification recite the “predictive models” as being applied to control how assets are transferred without any details of technical implementation, and therefore fail to provide specific steps that impose meaningful limits on the scope of the claim. Rather, the predictive model is generically applied to perform generic computer functions. The “trained model” is the “predictive model”; accordingly, the issues with the “predictive model” apply equally to the “trained model”. The “set of peer nodes” does not perform any steps integral to the claim that impose meaningful limits on its scope. Instead, the “nodes” are merely applied for data validation without any details of technical implementation, and are therefore generically applied to perform generic computer functions. This is equally true with respect to the “smart contract”. With respect to the claimed LLM chatbot, generic operations of such models include customer service (handling customer inquiries, providing information, assisting with resolution, and providing a customer experience) (see the article What’s in the Chatterbox?… by Okerlund; Chatbots in customer service: Their relevance and impact on service quality by Misischia et al.). The claim limitations fail to provide specific steps of the process, and the claimed LLM is not integral to the claim. Rather, the LLM is applied generically for communication over a communication channel, which likewise fails to provide any specific steps integral to the claim. The rejection is maintained. In the remarks applicant argues that the claim limitations integrate any alleged judicial exception through transformation of data.
The claim limitations recite “convert …data from a first format to second format…” and “generate at least one vector feature based on …features from user data…”. Applicant’s argument is not persuasive. The claim limitations do not focus on the technical details of data formatting, but instead recite high-level functions with expected outcomes, broad enough to include any known means of formatting data from one format to another. The specification is equally generic, merely disclosing formatting data from one format to another and reciting outcomes without technical specificity (para 0076, para 0171, para 0177, para 0179). With respect to the “vector” data limitations, the specification is equally non-specific, lacking technical disclosure and instead focusing on the data acted upon (para 0009-0011, para 0050, para 0057, para 0068-0069, para 0073, para 0077-0078). For data, mere “manipulation of basic mathematical constructs[,] the paradigmatic ‘abstract idea,’” has not been deemed a transformation. CyberSource v. Retail Decisions, 654 F.3d 1366, 1372 n.2, 99 USPQ2d 1690, 1695 n.2 (Fed. Cir. 2011) (quoting In re Warmerdam, 33 F.3d 1354, 1355, 1360 (Fed. Cir. 1994)). Also relevant is whether the transformation is extra-solution activity or a field-of-use limitation (i.e., the extent to which (or how) the transformation imposes meaningful limits on the execution of the claimed method steps). A transformation that contributes only nominally or insignificantly to the execution of the claimed method (e.g., in a data-gathering step or in a field-of-use limitation) would not provide significantly more (or integrate a judicial exception into a practical application). Mayo, 566 U.S. at 76, 101 USPQ2d at 1967. The Supreme Court disagreed, finding that such a step was only a field-of-use limitation and did not provide significantly more than the judicial exception. Id. See MPEP § 2106.05(g) & (h). The rejection is maintained.
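[Editor's illustration.] How broadly a "convert from a first format to a second format" and "generate a feature vector" limitation reads can be seen in a minimal sketch. The field names, data, and history values below are invented for illustration and are not from the application:

```python
# Any known means suffices for "format conversion": here, CSV text to dicts,
# then current fields plus historical data concatenated into one vector.
import csv
import io

def convert_format(csv_text):
    """Convert data from a first format (CSV text) to a second (dicts)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def feature_vector(record, history):
    """Combine current features with historical data into one vector."""
    return [float(record["income"]), float(record["debt"])] + history

raw = "income,debt\n50000,12000\n"      # hypothetical borrower data
records = convert_format(raw)
vec = feature_vector(records[0], [0.75, 0.9])  # hypothetical history values
```

A JSON parse, a spreadsheet export, or any serialization library would satisfy the same limitation, which is why the Action treats it as reciting an outcome rather than a technical process.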
In the remarks applicant points to the limitations “generate, via…the ML model, a predictive model”, “train the predictive model” and “execute the trained predictive model”, which recite transformation of feature vectors into a newly created predictive model, transformation of that model through training into a trained model with improved capabilities, and transformation of input data through model execution into recommendations and verdicts. The claim further recites “modify, based on communication …via the chat bot, the recommendations”. These limitations, applicant argues, recite transformation of an initial recommendation into modified recommendations based on natural language communication, and accordingly are not mere data gathering and/or insignificant extra-solution activity. Applicant’s argument is not persuasive and has already been addressed above. See the response above; the rejection is maintained. In the remarks applicant points to the limitation “control, over the network, the transfer of …assets based on modified recommendations”. Applicant argues this limitation transforms the modified recommendation into a concrete action, the actual transfer of assets to accounts, resulting in a real-world application. Applicant’s argument is not persuasive. The limitations merely apply technology at a high level to perform the abstract idea and fail to impose meaningful limits upon the abstract concept of asset transfer in a transaction. The rejection is maintained. In the remarks applicant argues that under Step 2B the claimed subject matter provides significantly more than the alleged abstract idea, pointing to DDR Holdings.
Specifically, applicant argues the ordered combination of the limitations recites a specific technological workflow that integrates multiple technologies: network-based data acquisition; format conversion; computational feature extraction; network-based searching using extracted features; feature-vector generation combining current and historical data; machine learning execution to generate predictive models; distributed ledger storage of the created models; smart contract data collection with peer node validation through chaincode; cryptographic signing and time-stamping of data; iterative testing and refining of the model; training the model to perform asset-control decisions; distributed ledger storage of the model; execution of the trained model to produce lending parameters; output of lending verdicts; recording of verdicts in the ledger; training based on information in the ledger; retrieval and analysis of the data for subsequent requests using the trained model; determination of recommendations; establishment of secure communication channels comprising an LLM chat bot; NLP communication with users via the chatbot comprising an exchange of information related to the recommendations; modification of recommendations based on communication with users via the chat bot; and network asset transfers based on modified recommendations. Applicant argues this specific ordered combination addresses multiple technical problems: ensuring training-data quality through peer data validation with chaincode; reducing data collection time during iterative training through cryptographic signing and time-stamping in the ledger storage; enabling continuous model improvement through ledger-based feedback loops; ensuring model governance and auditability through ledger storage of models and decisions; enabling user refinement of decisions through LLM chatbot NLP interaction; and achieving automated asset transfer based on ML predictions refined through user communication.
Applicant argues the combination is more than the sum of its parts: the cryptographic signing and time-stamping of data stored on the ledger enables a reduction of collection time during training; the storage of the model on the ledger enables usage and training; and the LLM chatbot communication enables modification of the recommendation. This process, applicant argues, provides an overall technological solution. Applicant’s argument is not persuasive. First, except for the statement that the encrypting and time-stamping of data reduces collection time when retrieving data from storage for use in analysis of borrower-related data, applicant has not identified what “multiple technical problems” are being addressed by the claim limitations when considered as a combination. The combination of parts, when considered as a whole, is directed toward collecting data using generic technology without technical details, processing data for analysis using generic technology at a high level with expected outcomes, analyzing the data using generic machine learning algorithms, and outputting the results, which are communicated to users using known technology at a high level, wherein the users provide feedback for modification of the results that are applied for further analysis and then outputted and applied for use in transferring assets in a transaction. With respect to the “reduced time” for data collection, this has already been discussed above. The claim limitations and specification do not support or explain how storing encrypted and time-stamped data reduces the retrieval/collection of stored data for use in further analysis. The specification and claims do not recite any technical process that is impacted because the data is encrypted and time-stamped. The rejection is maintained. In the remarks applicant points to Enfish, arguing the claim limitations similarly provide an improvement to the way computers operate.
Applicant argues that training-data collection “via an executed smart contract corresponding to a chaincode associated with a set of peer nodes that validates data from the set of databases”, data storage on a “distributed ledger”, the storage “cryptographically signing and time-stamping the data” such that the storage reduces collection time when performing training of the model, model storage “in a distributed ledger for subsequent usage and training”, and recommendation modification “based on communication and via chat bot” over the “chat channel comprising …LLM chatbot” achieve technological improvements. Applicant’s argument is not persuasive. Applicant has not identified what underlying technology is improved in this combination of generic operations in a manner analogous to Enfish. As discussed above, the ordered combination of operations merely applies technology to perform the abstract idea. With respect to the “reducing of collection time”, this has already been addressed above. See the response above; the rejection is maintained. In the remarks applicant points to McRO, where the use of rules achieved an improved technological result. Applicant argues the current limitations similarly recite specific rules and processes: converting from a “first format to second format”; data collection “via execution of a smart contract…corresponding to chaincode associated with peer nodes that validates data”; storage comprising “cryptographically signing and time-stamping data”; the model trained “to perform autonomous …asset control decisions”; communication “via secure communication chat channel” with users “via the chat bot”; and modification “based on the communication and via the chat bot”, which achieve improved technical results. Applicant’s argument is not persuasive. Applicant has not identified what rules in the current limitations improve the capability of the underlying technology.
As discussed above, the ordered combination of operations merely applies technology to perform the abstract idea. With respect to the “reducing of collection time”, this has already been addressed above. See the response above; the rejection is maintained. In the remarks applicant points to the Visual Memory decision, where the court held that specific structure enabling an improved computer memory system rendered the claims non-abstract. Applicant argues that the current limitations similarly recite specific structures: a “distributed ledger” on which data, models and verdicts are stored; a “smart contract” corresponding to a chaincode associated with peer nodes; a “large language model (LLM) chat bot” within a “secure communication chat channel”; and a “predictive model being newly created ML model” for controlling how assets are transferred, which enable the improved functionality identified in the claim of “reducing time when performing the training of the predictive model”. Applicant’s argument is not persuasive. The limitations “large language model (LLM) chat bot” within a “secure communication chat channel” and “predictive model being newly created ML model” for controlling how assets are transferred have nothing to do with the argued data-retrieval improvement. With respect to the argument of improving technology by “reducing time when performing the training of the predictive model”, this argument has already been addressed above; see the response above. In the remarks applicant points to Finjan, which found patent eligibility based on an improvement to computer functionality. Applicant argues the current limitations are not results-oriented but instead specify how the results are accomplished through the recited claim limitations (smart contract execution with chaincode and peer node validation for data collection; cryptographic signing and time-stamping of distributed ledger storage to reduce collection time).
Applicant further points to distributed ledger storage of models for subsequent usage and training, LLM chatbot communication enabling recommendation modification, and control based on modified recommendations. These limitations, applicant argues, specify the technological mechanisms that achieve reduced collection time when performing training. The Finjan argument has been addressed above; see the response above. The limitations “large language model (LLM) chat bot” within a “secure communication chat channel” and “predictive model being newly created ML model” for controlling how assets are transferred have nothing to do with the argued data retrieval. With respect to the argument of improving technology by “reducing time when performing the training of the predictive model”, this argument has already been addressed above; see the response above. In the remarks applicant argues that the claim limitations reflect improvements through their own language, specifically the format conversion, the smart contract limitations, distributed ledger storage comprising cryptographically signing and time-stamping data, predictive model generation, the chatbot modification limitation, and the control limitation based on modified recommendations. Applicant argues that these limitations satisfy the Ex parte Desjardins decision, where patent eligibility can be found when the specification supports the claim limitations. Applicant’s argument is not persuasive. These arguments with respect to the Desjardins decision and the limitations presented have already been addressed above; see the response above, the rejection is maintained. In the remarks applicant argues the previous Office Action is inconsistent with USPTO guidance and court precedent. Applicant cites Desjardins with respect to AI innovations, arguing that rejections expressing that the claims recite generic machine learning concepts are inconsistent with the Desjardins decision.
Applicant recites the limitations, arguing they demonstrate that the claim is not directed toward machine learning in the abstract but to specific technological implementations that achieve specific improvements. Applicant points to MPEP 2106 for direction on evaluating claims related to machine learning and the recited technology of the claim. Applicant’s argument is not persuasive. Applicant has not identified what analysis of the previous Office Action is inconsistent with the Desjardins decision or MPEP guidance. The rejection is maintained. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-9, 21-30 and 32-34 are rejected under 35 U.S.C. § 101 because the instant application is directed to non-patentable subject matter. Specifically, the claims are directed toward at least one judicial exception without reciting additional elements that amount to significantly more than the judicial exception. The rationale for this determination is in accordance with USPTO guidelines, applies to all statutory categories, and is explained in detail below. In reference to Claims 1-9 and 32-34: STEP 1. Per Step 1 of the two-step analysis, the claims are determined to include a system, as in independent Claim 1 and the dependent claims. Such systems fall under the statutory category of “machine.” Therefore, the claims are directed to a statutory eligibility category. STEP 2A Prong 1. The claimed invention is directed to an abstract idea without significantly more.
The functions of system claim 1 include: 1) acquire data; 2) convert data from one format to another; 3) analyze data and determine data features; 4) search a database to retrieve data; 5) generate a feature vector based on a plurality of features and historical data; 6) provide the feature vector as input to the ML model for executing the model; 7) generate a predictive model, comprising storing the model; 8) train the model by collecting data, storing the collected data, and testing/refining the model iteratively based on the collected/stored data; 9) train the model; 10) store the model; 11) execute the predictive model to produce an adjustable lending parameter; 12) output a lending verdict; 13) record the lending verdict result; 14) train the model; 15) retrieve information related to the lending verdict; 16) execute the model to analyze data; 17) determine a recommendation; 18) establish a chat channel; 19) communicate with the user; 20) modify the recommendation; 21) control asset transfer. The claimed limitations, under their broadest reasonable interpretation, cover the performance of commercial interactions. When considered as a whole, the claimed subject matter is directed toward receiving and manipulating data for analysis in a lending process. Such concepts fall within the abstract category of fundamental economic activity and commercial/legal interactions. This is because loan decisions are “long prevalent in our system of commerce” (see Bilski v. Kappos, 561 U.S. 593, 611, 95 USPQ2d 1001, 1010 (2010), which held the practice at issue to be a “fundamental economic practice long prevalent in our system of commerce” and also “a building block of the modern economy”) (citation omitted); Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1313, 120 USPQ2d 1353, 1356 (Fed. Cir. 2016) (“The category of abstract ideas embraces ‘fundamental economic practice[s] long prevalent in our system of commerce,’ … including ‘longstanding commercial practice[s]’”).
The claim limitations for lending decisions that are granted/declined and recorded fall in the sub-category of commercial/legal interactions: “where the interaction is an agreement in the form of contracts, is found in buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 112 USPQ2d 1093 (Fed. Cir. 2014). The agreement at issue in buySAFE was a transaction performance guaranty, which is a contractual relationship. 765 F.3d at 1355, 112 USPQ2d at 1096.” These concepts are enumerated in Section I of the 2019 Revised Patent Subject Matter Eligibility Guidance published in the Federal Register (84 FR 50, January 7, 2019) and are directed toward the abstract category of methods of organizing human activity. STEP 2A Prong 2: The identified judicial exception is not integrated into a practical application because the additional elements recited in the claim beyond the abstract idea fail to provide indications of patent-eligible subject matter. The additional elements include a system processor, a network, a machine learning model, a smart contract, a distributed ledger, an LLM chat bot, and a communication chat channel. The generic system processor is recited to perform the operations “acquire data”, “collect data via smart contract”, “store data”, “output lending verdict”, “record verdict”, and “retrieve data” at a high level of generality lacking technical disclosure, and thus constitutes insignificant extra-solution activity. (The court stated that the claims describe steps of recording, administration and archiving of digital images, and found them to be directed to the abstract idea of classifying and storing digital images in an organized manner.
TLI Communications, 823 F.3d at 612, 118 USPQ2d at 1747; see MPEP 2106.05(d)(II); MPEP 2106.05(g).) The generic system processor performs, at a high level lacking technical disclosure, the operation “convert data from first format to second format”, mere data manipulation and organization. For data, mere “manipulation of basic mathematical constructs[,] the paradigmatic ‘abstract idea,’” has not been deemed a transformation. CyberSource v. Retail Decisions, 654 F.3d 1366, 1372 n.2, 99 USPQ2d 1690, 1695 n.2 (Fed. Cir. 2011) (quoting In re Warmerdam, 33 F.3d 1354, 1355, 1360 (Fed. Cir. 1994)). Also relevant is whether the transformation is extra-solution activity or a field-of-use limitation (i.e., the extent to which (or how) the transformation imposes meaningful limits on the execution of the claimed method steps). A transformation that contributes only nominally or insignificantly to the execution of the claimed method (e.g., in a data-gathering step or in a field-of-use limitation) would not provide significantly more (or integrate a judicial exception into a practical application). Mayo, 566 U.S. at 76, 101 USPQ2d at 1967. The generic system processor performs, at a high level lacking technical details of implementation, the operations “analyze data”, “testing, and refining iteratively…the model”, and “determine recommendation”, merely applying technology to analyze data in order to determine recommendations and lending decisions. The generic system processor performs, at a high level of generality, the operation “search database based on query”, merely applying a processor to perform a transaction-related step of searching for financial data. The generic system processor performs, at a high level lacking technical details of implementation, the operations “generate a feature vector”, “train model”, and “execute machine learning model”, merely applying technology to program a model for use in analyzing financial data.
The operations are not directed toward an improvement of machine learning training technology or machine learning technology, nor do they solve a problem rooted in machine learning technology. The generic system processor performs, at a high level lacking technical details of implementation, the operation “establishing secure communication chat channel” over a network, merely applying technology for use in communication. The generic system processor performs, at a high level lacking technical details of implementation, the operation “control transfer of assets”, merely applying technology to perform a transaction process. The generic machine learning model performs the operation “produce lending parameter”, applying technology at a high level of generality to implement a transaction-related output without technical details. The additional element “network” merely provides the technical environment for communication and transmission of data; the claim limitations are silent with respect to any specific operations of the network beyond the general field of use for communication. The additional element “chat bot” comprising an LLM model fails to provide any process in which the LLM model performs any function; the chat bot is merely applied to communicate with users for the exchange of information and for use in modifying a recommendation. The functions of the additional elements, as discussed above, are recited at a high level of generality such that they amount to no more than applying the exception using generic computer components. Taking the claim elements separately, the operation performed by the system at each function of the process is purely in terms of results desired and devoid of implementation details.
This is true with respect to the limitations “generate at least one feature vector based on the plurality of features and the location historical user related data”; “train…model” comprising “collecting data”, “storing data” and “testing and refining, iteratively”; “execute ML model”, where “execution” is “providing …feature vector as input to ML model”; and “establish communication channel”, as the claimed limitations do not provide any technical details on how, as a technical process, to perform the recited functions. The limitations are not directed toward improving ML technology but instead apply ML technology in order to predict lending verdicts. Technology is not integral to the process, as the claimed subject matter is at so high a level that any generic programming could be applied and the functions could be performed by any known means. Furthermore, the claimed functions do not provide an operation that could be considered sufficient to provide a technological implementation or application of, or improvement to, this concept (i.e., integration into a practical application). When the claims are taken as a whole, as an ordered combination, the combination of limitations 1-2 is directed toward collecting and translating data for analysis, mere data collection and manipulation. For data, mere “manipulation of basic mathematical constructs[,] the paradigmatic ‘abstract idea,’” has not been deemed a transformation. A transformation that contributes only nominally or insignificantly to the execution of the claimed method (e.g., in a data-gathering step or in a field-of-use limitation) would not provide significantly more (or integrate a judicial exception into a practical application). Mayo, 566 U.S. at 76, 101 USPQ2d at 1967. The combination of limitations 1-2 and 3 is directed toward analyzing the collected data for a business process, mere automation of a business process.
The combination of limitations 1-3 and 4-7 is directed toward collecting, analyzing, and retrieving data that is input into a machine for analyzing lending parameters for controlling and transferring assets, i.e., applying analysis to collected/retrieved data using vectors for a business practice. The combination of limitations 1-7 and 8-12 is directed toward training a model using financial data for use in analyzing financial data to output a lending verdict. The combination of limitations 1-12 and 13-16 is directed toward training a model using retrieved data to analyze financial data, applying technology at a high level to analyze financial data and amounting to mere instructions to perform the analysis (i.e., automating manual and mental processes). The combination of limitations 13-16 and 17-20 is directed toward outputting the result of limitations 13-16 to a user using a communication system for conversations, applying technology to perform insignificant extra-solution activity. The combination of limitations 13-20 and 21 is directed toward performing an asset transfer process based on the result of limitations 1-20. The combination of parts is not directed toward any technical process, technological technique, or technological solution to a problem rooted in technology, but rather toward collecting, manipulating and providing data that is applied in a model used to output a lending decision and, in response to the request for asset transfer, controlling access to assets based on the lending decision output. In addition, when the claims are taken as a whole, as an ordered combination, the combination of steps does not integrate the judicial exception into a practical application, as the claimed process fails to impose meaningful limits upon the abstract idea. This is because the claimed subject matter as a whole is directed toward generating vectors for a model in order to generate lending parameters and output lending verdicts.
This is because the claimed subject matter fails to provide additional elements, or a combination of elements, that apply or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception. The functions recited in the claims recite the concept of retrieving, organizing, and manipulating financial-related data and generating data vectors for a model that is applied to predict lending verdicts, which is a process directed toward applying technology to a business practice. The focus of the claims is not on a specific improvement in relevant technology, but instead qualifies as an “abstract idea” for which computers are invoked merely as a tool. Here, it is clear from the Specification (para 0009-0010, para 0033, para 0037, para 0039, para 0050, para 0057) that the disclosure is directed toward providing a system for loan processing in which a model is applied in order to generate a predictive lending module. Claim 1 focuses on the manipulation of the data obtained, which is then applied in a model for predicting lending verdicts (an abstract idea), and not on an improvement to technology and/or a technical field. The Specification is titled “System and Method for AI-Based Loan Processing,” and discloses, in the Background section, that other loan processing systems are directed toward addressing various aspects of loan processing based on borrower-extracted data, processing, and automation, but that these aspects do not process applications using predictive loan approval recommendations generated by AI engines (Spec. 6). The Specification describes that the focus of the invention is automated loan processing based on predictive analytics of data (Spec. 7). There is no indication in the claim language that the structure and/or the manner in which a computer system operates is changed in any way beyond its ordinary capacity. Nor does the analysis find any such indication elsewhere in the record.
The Specification describes the challenges associated with loan approvals that could be addressed by using predictive loan approval recommendations (Spec. 6). With respect to “generate…feature vector” or “provide…feature vector to ML module”, the claim provides no technical details regarding how the “generate” operation is performed. Instead, similar to the claims at issue in Intellectual Ventures I LLC v. Capital One Financial Corp., 850 F.3d 1332 (Fed. Cir. 2017), “the claim language . . . provides only a result-oriented solution with insufficient detail for how a computer accomplishes it. Our law demands more.” Intellectual Ventures, 850 F.3d at 1342 (citing Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1356 (Fed. Cir. 2016)). The integration of elements does not improve upon technology or improve computer functionality or capability in how computers carry out one of their basic functions. The integration of elements does not provide a process that allows computers to perform functions that previously could not be performed. The integration of elements does not provide a process which applies a relationship to enable a new way of using an application. The instant application, therefore, still appears only to implement the abstract idea in the particular technological environment, applying generic computer functionality known in the related arts. The steps remain a combination made to collect, manipulate, and generate vector data for modeling lending verdicts, and they do not provide any of the indications of patent eligibility set forth in the 2019 USPTO 101 guidance. The additional steps only add to those abstract ideas using generic functions, and the claims do not show improved ways of, for example, performing a particular technical function for carrying out the abstract idea that imposes meaningful limits upon it.
Moreover, the Examiner was not able to identify any specific technological process that goes beyond merely confining the abstract idea to a particular technological environment and which, when considered in ordered combination with the other steps, could have transformed the nature of the abstract idea previously identified. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. STEP 2B: The claim(s) do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above, the additional elements do not integrate the abstract idea into a practical application. The additional elements recited in the claim beyond the abstract idea include a system comprising a processor to perform the operations of “acquire…data”, “convert…data”, “analyze…data”, “search…database to retrieve data…”, “generate…feature vector on data features”, “generate predictive model”, “train model”, “store model”, “output the results of the analysis”, “execute a ML model”, “input feature vector data to the ML module applied to produce lending parameters”, “search over a network and retrieve data”, “record the result”, “train…model”, “retrieve user data”, “determine recommendation”, “modify recommendation” and “control assets related to lending decision in response to asset transfer request”. The ML module is only trivially mentioned and lacks technical disclosure with respect to generating the feature vector, producing the lending parameter, and generating the model used. The additional element of the LLM chat model is likewise only trivially mentioned: the communication chat channel that is established and applied for communication lacks technical disclosure, merely applying technology to perform insignificant extra-solution activity.
Taking the claim elements separately, the function performed by the computer at each step of the process is purely conventional. Using a computer processor to acquire and parse data, or to query a database to retrieve data, are some of the most basic functions of a computer. When the claims are taken as a whole, as an ordered combination, the combination of steps does not add “significantly more” by virtue of considering the steps as a whole. All of these computer functions are generic, routine, conventional computer activities that are performed only for their conventional uses. See Elec. Power Grp. v. Alstom S.A., 830 F.3d 1350, 1353 (Fed. Cir. 2016); see also In re Katz Interactive Call Processing Patent Litigation, 639 F.3d 1303, 1316 (Fed. Cir. 2011). Absent a possible narrower construction of the terms, “acquire…data”, “search…database to retrieve data…”, “generate…feature vector on features and data”, “provide feature vector to the ML module to generate a model”, and the like are functions that can be achieved by any general-purpose computer without special programming. None of these activities are used in some unconventional manner, nor do any produce some unexpected result. Applicant does not contend it invented any of these activities. In short, each step does no more than require a generic computer to perform generic computer functions. As to the data operated upon, “even if a process of collecting and analyzing information is ‘limited to particular content’ or a particular ‘source,’ that limitation does not make the collection and analysis other than abstract.” SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161, 1168 (Fed. Cir. 2018). Considered as an ordered combination, the computer components of Applicant’s claimed functions add nothing that is not already present when the steps are considered separately. The sequence of data reception, analysis, modification, and transmission is equally generic and conventional. See Ultramercial, Inc. v.
Hulu, LLC, 772 F.3d 709, 715 (Fed. Cir. 2014) (sequence of receiving, selecting, offering for exchange, display, allowing access, and receiving payment recited as an abstraction); Inventor Holdings, LLC v. Bed Bath & Beyond, Inc., 876 F.3d 1372, 1378 (Fed. Cir. 2017) (sequence of data retrieval, analysis, modification, generation, display, and transmission); Two-Way Media Ltd. v. Comcast Cable Communications, LLC, 874 F.3d 1329, 1339 (Fed. Cir. 2017) (sequence of processing, routing, controlling, and monitoring). The ordering of the steps is therefore ordinary and conventional. The analysis concludes that the claims do not provide an inventive concept because the additional elements recited in the claims do not provide significantly more than the recited judicial exception. According to MPEP § 2106.05, using well-understood and routine processes to perform the abstract idea is not sufficient to transform the claim into patent-eligible subject matter. As evidence, the Examiner provides the following: The specification discloses the “generate…feature vector” at a high level and lacks technical disclosure (see para 0009-0011, para 0050, para 0057, para 0059, para 0069, para 0073, para 0077-0078). Para 0024 discloses that the order of processes is not particular: “Thus, for example, any sequence(s) and/or temporal order of steps of various processes or methods that are described herein are illustrative and not restrictive. Accordingly, it should be understood that, although steps of various processes or methods may be shown and described as being in a sequence or temporal order, the steps of any such processes or methods are not limited to being carried out in any particular sequence or order, absent an indication otherwise. Indeed, the steps in such processes or methods generally may be carried out in various different sequences and orders while still falling within the scope of the present invention.”
Para 0086 discloses the readable medium: the computer-readable instructions may reside in random access memory ("RAM"), flash memory, read-only memory ("ROM"), erasable programmable read-only memory ("EPROM"), electrically erasable programmable read-only memory ("EEPROM"), registers, hard disk, a removable disk, a compact disk read-only memory ("CD-ROM"), or any other form of storage medium known in the art. [0087] An exemplary storage medium may be coupled to the processor such that the processor may read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit ("ASIC"). In the alternative embodiment, the processor and the storage medium may reside as discrete components. For example, FIG. 5 illustrates an example computing device (e.g., a server node) 500, which may represent or be integrated in any of the above-described components, etc. [0095] Consistent with an embodiment of the disclosure, the aforementioned CPU 520, the bus 530, the memory unit 550, a PSU 550, and the plurality of I/O units 560 may be implemented in a computing device, such as computing device 500. Any suitable combination of hardware, software, or firmware may be used to implement the aforementioned units. For example, the CPU 520, the bus 530, and the memory unit 550 may be implemented with computing device 500 or any of other computing devices 500, in combination with computing device 500. The aforementioned system, device, and components are examples and other systems, devices, and components may comprise the aforementioned CPU 520, the bus 530, the memory unit 550, consistent with embodiments of the disclosure. [0096] At least one computing device 500 may be embodied as any of the computing elements illustrated in all of the attached figures, including the LS node 102 (FIG. 2).
A computing device 500 does not need to be electronic, nor even have a CPU 520, nor bus 530, nor memory unit 550. The definition of the computing device 500 to a person having ordinary skill in the art is "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." Any device which processes information qualifies as a computing device 500, especially if the processing is purposeful. The specification makes clear that the “generate…predictive model” and “train…model” limitations are generic, as the specification and claims lack technical disclosure and the model is merely applied in the claim as a generic computer component to perform the lending prediction analysis. The specification discloses the “training” and “generating” of the model at a high level, lacking technical disclosure; instead the specification focuses on the data acted upon and the application of the model to analyze financial data for a financial result. [0010]… “generating at least one feature vector based on the plurality of features and the local historical borrowers'-related data; and providing the at least one feature vector to the ML module configured to generate a predictive model for producing at least one lending parameter for generation of the borrower-related lending verdict for the at least one lender entity node” [0011]… providing the at least one feature vector to the ML module configured to generate a predictive model for producing at least one lending parameter for generation of the borrower-related lending verdict for the at least one lender entity node. [0050]… The LS node 102 may ingest the feature vector data into an AI/ML module 107.
The AI/ML module 107 may generate a predictive model(s) 108 based on the feature vector data to predict lending parameters for automatically generating a lending verdict and/or lending recommendations to be provided to the lender entities 113 (e.g., loan officers, underwriters, other practitioners, etc.). The lending parameters and/or loan risk assessment parameters may be further analyzed by the LS node 102 prior to generation of the loan verdict…. [0057]… The LS node 102 may ingest the feature vector data into an AI/ML module 107. The AI/ML module 107 may generate a predictive model(s) 108 based on the feature vector data to predict lending parameters for automatically generating a lending verdict and/or lending recommendations to be provided to the lender entities 113 (e.g., loan officers, underwriters, other practitioners, etc.)…. [0060] The AI/ML module 107 may generate a predictive model(s) 108 to predict the lending verdict and/or lending recommendation parameters for the borrower 111 in response to the specific relevant pre-stored borrowers'-related data acquired from the blockchain 110 ledger 109. [0064] The AI/ML module 107 may generate a predictive model(s) 108 based on the received borrower-related data 202 and the borrowers'-related data provided by the LS node 102. As discussed above, the AI/ML module 107 may provide predictive output data in the form of lending parameters for automatic generation of a lending verdict and/or lending recommendations for the lender entities 113 (see FIG. 1B).
[0069] The processor 204 may fetch, decode, and execute the machine-readable instructions 222 to provide the at least one feature vector to the ML module 107 configured to generate a predictive model 108 for producing at least one lending parameter for generation of the borrower-related lending verdict for the at least one lender entity node… After the loan is closed, the related documents may be converted into unique secure NFT assets to be recorded on the blockchain to be used for lending model training. [0073]… the processor 204 may provide the at least one feature vector to the ML module configured to generate a predictive model for producing at least one lending parameter for generation of the borrower-related lending verdict for the at least one lender entity node. [0079] In one disclosed embodiment, the lending parameters' model may be generated by the AI/ML module 107 that may use training data sets to improve accuracy of the prediction of the lending parameters for the lender entities 113 (FIG. 1A)… [0082]… The blockchain 110 can be used to significantly improve both a training process 402 of the machine learning model and the lending parameters' predictive process 405 based on a trained machine learning model… [0083] This can significantly reduce the collection time needed by the host platform 420 when performing predictive model training. For example, using smart contracts, data can be directly and reliably transferred straight from its place of origin (e.g., from the LS node 102 or from borrowers' databases 103 and 106 depicted in FIGs. 1A-1B) to the blockchain 110. By using the blockchain 110 to ensure the security and ownership of the collected data, smart contracts may directly send the data from the assets to the entities that use the data for building a machine learning model… [0084] Furthermore, training of the machine learning model on the collected data may take rounds of refinement and testing by the host platform 420.
Each round may be based on additional data or data that was not previously considered to help expand the knowledge of the machine learning model. In 402, the different training and testing steps (and the data associated therewith) may be stored on the blockchain 110 by the host platform 420. Each refinement of the machine learning model (e.g., changes in variables, weights, etc.) may be stored on the blockchain 110. This, advantageously, provides verifiable proof of how the model was trained and what data was used to train the model. Furthermore, when the host platform 420 has achieved a finally trained model, the resulting model itself may be stored on the blockchain 110. [0085] After the model has been trained, it may be deployed to a live environment where it can make recommendation-related predictions/decisions based on the execution of the final trained machine learning model using the prediction parameters. In this example, data fed back from the asset 430 may be input into the machine learning model and may be used to make event predictions such as the most optimal loan approval and loan scheduling parameters for the borrower based on the recorded borrower's data. Determinations made by the execution of the machine learning model (e.g., lending verdict and lending recommendations, loan risk assessment data, etc.) at the host platform 420 may be stored on the blockchain 110 to provide auditable/verifiable proof. As one non-limiting example, the machine learning model may predict a future change of a part of the asset 430 (the lending recommendation parameters, i.e., assessment of risk of unsuccessful loan approval). The data behind this decision may be stored by the host platform 420 on the blockchain 110.
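By way of illustration (not part of the record), the audit pattern the specification describes, recording each training/refinement round so that model provenance is verifiable, is itself achievable with conventional programming. A real system would write to a blockchain; here a simple hash-linked list stands in, and all names (record_round, ledger, weights_checksum) are hypothetical:

```python
# Hypothetical sketch: each training round is appended to a hash-chained
# log, so any later tampering with an earlier round breaks the chain.
import hashlib
import json

def record_round(ledger: list, round_data: dict) -> list:
    """Append round_data to the ledger, linking it to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps(round_data, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    ledger.append({"prev": prev_hash, "data": round_data, "hash": entry_hash})
    return ledger

ledger = []
record_round(ledger, {"round": 1, "weights_checksum": "abc", "accuracy": 0.81})
record_round(ledger, {"round": 2, "weights_checksum": "def", "accuracy": 0.86})
```

The hash-chaining shown is the generic, well-understood mechanism underlying the distributed-ledger storage the specification invokes.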
The specification discloses the chat communication in its application to a financial process in communication with users, lacking technical details and providing a laundry list of known communication protocols that can be applied for communication with a user in a financial process. [0034] This process includes a transparent lending recommendations/verdict mechanism that may be coupled with a secure communications chat channel (implemented over a blockchain network) which supports both parties to set and agree on the loan processing and terms with each other. In one embodiment, the chat channel may be implemented using a chat bot. [0038] In one embodiment, borrower calls may be recorded, transcribed and processed by an AI-based chat bot configured to answer questions and also give feedback and relay the feedback from the lending server to the borrowers in an automated fashion. The responses may be based on other borrowers in similar situations across similar industries with similar requests and similar loan types. [0045] Referring to FIG. 1A, the example network 100 includes the lending server (LS) node 102 connected to a cloud server node(s) 105 over a network. The LS node 102 is configured to host an AI/ML module 107. The LS node 102 may receive borrower data from a borrower 111. The LS node 102 may receive call data related to communication between the borrower 111 and a responding entity that may be implemented as a chat bot (not shown). [0046] The call data may have language indicator metadata representing the language of the borrower used during the call. The call data may refer to any communications such as borrower communications with the lending entities (i.e., loan officers, underwriters, agents, other practitioners, etc.) directly or via a chatbot application. In one embodiment, the call data may be processed by the LS node 102 using the pre-trained large language models.
The LS node 102 may derive the language indicator and parse out the call data based on the language indicator metadata. In other words, the key features of the call data may be, advantageously, derived from the call data based on the language of the call or email or other communication. [0052] Referring to FIG. 1B, the example network 100' includes the lending server (LS) node 102 connected to a cloud server node(s) 105 over a network. The LS node 102 is configured to host an AI/ML module 107. The LS node 102 may receive borrower data from a borrower 111. The LS node 102 may receive call data related to communication between the borrower 111 and a responding entity that may be implemented as a chat bot (not shown). [0053] The call data may have language indicator metadata representing the language of the borrower used during the call. In one embodiment, the call data may be processed by the LS node 102 using the pre-trained large language models. The LS node 102 may derive the language indicator and parse out the call data based on the language indicator metadata. In other words, the key features of the call data may be, advantageously, derived from the call data based on the language of the call. [0062] Referring to FIG. 2, the example network 200 includes the LS node 102 connected to the borrower entity 101 (FIGs. 1A-B) to receive borrower data 201. The LS node 102 may be connected to a chat bot (not shown) to receive call data. [0076] With reference to FIG. 3B, at block 314, the processor 204 may receive borrower call data from a chat bot associated with the at least one lender entity node, the call data comprising data generated during the borrower's communication with the chat bot; derive language metadata from the call data; and parse the call data based on the language metadata to derive a plurality of key features.
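By way of illustration (not part of the record), the recited sequence of deriving a language indicator from call-data metadata and parsing the transcript into key features is expressible, at the level of generality disclosed, in ordinary general-purpose code. All names and the keyword table below are hypothetical:

```python
# Hypothetical sketch: read a language indicator from metadata, then keep
# only the transcript words found in a per-language keyword table.
KEYWORDS = {"en": ["loan", "amount", "term"], "es": ["préstamo", "monto", "plazo"]}

def parse_call_data(call_data: dict) -> dict:
    """Derive the language indicator and a list of key features from call data."""
    language = call_data.get("metadata", {}).get("language", "en")
    words = call_data["transcript"].lower().split()
    key_features = [w for w in words if w in KEYWORDS.get(language, [])]
    return {"language": language, "key_features": key_features}

call = {"metadata": {"language": "en"},
        "transcript": "Borrower asked about the loan amount and term"}
result = parse_call_data(call)
```

As with the other recited functions, nothing here goes beyond routine data collection and filtering.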
At block 316, the processor 204 may retrieve remote historical borrowers'-related data from at least one remote borrowers' database based on the local historical borrowers'-related data, wherein the remote historical borrowers'-related data is collected at locations associated with a plurality of lender entities affiliated with financial institutions. Note that the call data may be audio data and/or textual data including emails, messages, voice-to-text converted communications, etc. [00143] Two nodes can be networked together when one computing device 500 is able to exchange information with the other computing device 500, whether or not they have a direct connection with each other. The communication sub-module 562 supports a plurality of applications and services, such as, but not limited to, World Wide Web (WWW), digital video and audio, shared use of application and storage computing devices 500, printers/scanners/fax machines, email/online chat/instant messaging, remote control, distributed computing, etc. The network may comprise a plurality of transmission mediums, such as, but not limited to, conductive wire, fiber optics, and wireless. The network may comprise a plurality of communications protocols to organize network traffic, wherein application-specific communications protocols are layered (and may be known to a person having ordinary skill in the art as carried as payload) over other more general communications protocols.
The plurality of communications protocols may comprise, but not limited to, IEEE 802, Ethernet, Wireless LAN (WLAN/Wi-Fi), Internet Protocol (IP) suite (e.g., TCP/IP, UDP, Internet Protocol version 4 [IPv4], and Internet Protocol version 6 [IPv6]), Synchronous Optical Networking (SONET)/Synchronous Digital Hierarchy (SDH), Asynchronous Transfer Mode (ATM), and cellular standards (e.g., Global System for Mobile Communications [GSM], General Packet Radio Service [GPRS], Code-Division Multiple Access [CDMA], and Integrated Digital Enhanced Network [iDEN]). US Pub. No. 2022/0291666 A1 by Cella et al., para 01144, wherein the prior art teaches applying conventional blockchain technology for smart contracts, data sharing, and managing access control. US Pub. No. 2022/0237646 A1 by Michel et al., para 0022, para 0055, wherein the prior art teaches, in background context, attribute vectors applied to data. US Pub. No. 2022/0061236 A1 by Guan et al., para 0319, wherein the prior art teaches using blockchain as a commonly used storage device. US Pub. No. 2019/0205773 A1 by Ackerman et al., para 0549, wherein the prior art teaches that applying popular mainstream distributed ledger architectures is known in the art. The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity: i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v.
Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); ii. Performing repetitive calculations, Flook, 437 U.S. at 594, 198 USPQ at 199 (recomputing or readjusting alarm limit values); Bancorp Services v. Sun Life, 687 F.3d 1266, 1278, 103 USPQ2d 1425, 1433 (Fed. Cir. 2012) ("The computer required by some of Bancorp’s claims is employed only for its most basic function, the performance of repetitive calculations, and as such does not impose meaningful limits on the scope of those claims."); iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93. With respect to blockchain technology with smart contracts, as evidence the Examiner provides the NPL article “Blockchain Development History Overview” from blockchainhistory.com, Smart Contract Era (2013-2017): If Bitcoin proved the viability of decentralized currency, then Ethereum opened the era of “programmable blockchain,” elevating blockchain from a simple “world ledger” to a “world computer.”
Important milestones: at the end of 2013, Vitalik Buterin published the Ethereum whitepaper; in 2014, Ethereum crowdfunding raised $18 million; on July 30, 2015, the Ethereum mainnet launched. Smart Contracts: self-executing code contracts. DApps: the rise of decentralized applications. “Blockchain History Timeline: From Inception to Future” by Klever discloses the expansion of blockchain technology with the writing of smart contracts in 2015, and discloses that the Klever blockchain introduces smart contracts in decentralized apps. With respect to the LLM model chatbot, evidence that such application of this technology is well understood is provided in the article “History and Evolution of LLMs” by GeeksforGeeks: “1. The First Steps in NLP (1960s - 1990s): The journey of Large Language Models (LLMs) began in 1966 with ELIZA, a simple chatbot that mimicked conversation using predefined rules but lacked true understanding. By the 1980s, AI transitioned from manual rules to statistical models, improving text analysis. In the 1990s, Recurrent Neural Networks (RNNs) introduced the ability to process sequential data, laying the foundation for modern NLP. 2. The Rise of Neural Networks and Machine Learning (1997 - 2010): A breakthrough came in 1997 with Long Short-Term Memory (LSTM), which solved RNNs’ memory limitations, making AI better at understanding long sentences. By 2010, tools like Stanford’s CoreNLP helped researchers process text more efficiently. 3. The AI Revolution and the Birth of Modern LLMs (2011 - 2017): The AI revolution gained momentum in 2011 with Google Brain, which leveraged big data and deep learning for advanced language processing. In 2013, Word2Vec improved AI’s ability to understand word relationships through numerical representations. Then in 2017, Google introduced Transformers in ‘Attention Is All You Need,’ revolutionizing LLMs by making them faster, smarter, and more powerful.”
The instant application, therefore, still appears only to implement the abstract ideas in the particular technological environment using what are generic components and functions in the related arts. The claim is not patent eligible. The remaining dependent claims, which impose additional limitations, also fail to claim patent-eligible subject matter because the limitations cannot be considered statutory. In reference to claims 2-9 and 32-34: these dependent claims have also been reviewed under the same analysis as independent claim 1. Dependent claim 2 is directed toward receiving data, deriving language metadata, and parsing data to derive key features, i.e., data collection, manipulation, and analysis, applying technology to analyze data for a business practice. Dependent claim 3 is directed toward retrieving data from a database, applying technology to retrieve data for a business practice. Dependent claim 4 is directed toward generating a feature vector based on features and historical data, i.e., data organization, applying technology to organize data for application in business. Dependent claim 5 is directed toward generating profile data based on borrower data and key features, directed toward a business practice. Dependent claim 6 is directed toward periodically monitoring data to determine whether a value deviates from a value margin threshold, directed toward a business practice. Dependent claim 7 is directed toward generating an updated feature vector based on a condition and generating a lending verdict, i.e., data manipulation directed toward a business practice. Dependent claim 8 is directed toward recording a lending parameter on a blockchain ledger, applying technology to record data. Dependent claim 9 is directed toward retrieving a lending parameter from the blockchain responsive to consensus, applying technology to retrieve business data.
Dependent claim 32 is directed toward training the model at a high level lacking technical disclosure (well understood and routine) and toward a model based on transfer of assets responsive to a request, applying technology to a business process. Dependent claim 33 is directed toward converting documents into NFT assets, using technology to represent ownership of assets (i.e., legal contracts/deeds). Dependent claim 34 is directed toward analyzing a recommendation, directed toward a business practice. The dependent claims have been examined individually and in combination with the preceding claims; however, they do not cure the deficiencies of claim 1. Where all claims are directed to the same abstract idea, “addressing each claim of the asserted patents [is] unnecessary.” Content Extraction & Transmission LLC v. Wells Fargo Bank, Nat’l Ass’n, 776 F.3d 1343, 1348 (Fed. Cir. 2014). If Applicant believes that dependent claims 2-9, 21-30 and 32-34 are directed toward patent-eligible subject matter, Applicant is invited to point out the specific limitations in the claims that are directed toward patent-eligible subject matter. In reference to Claims 21-29: STEP 1. Per Step 1 of the two-step analysis, the claims are determined to include a method, as in independent Claim 21 and the dependent claims. Such methods fall under the statutory category of “process.” Therefore, the claims are directed to a statutory eligibility category. STEP 2A, Prong 1: The steps of method claim 21 correspond to the functions of system claim 1. Therefore, claim 21 has been analyzed and rejected as being directed toward an abstract idea within the categories of concepts directed toward methods of organizing human activity, as previously discussed with respect to claim 1. STEP 2A, Prong 2: The steps of method claim 21 correspond to the functions of system claim 1.
Therefore, claim 21 has been analyzed and rejected as failing to provide limitations that are indicative of integration into a practical application, as previously discussed with respect to claim 1. STEP 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above, the additional elements do not integrate the abstract idea into a practical application. The additional elements recited in claim 21 beyond the abstract idea include a server node to perform the operations corresponding to claim 1. The server node is purely functional and generic. Nearly every server for implementing a method is capable of performing the basic computer functions of "acquiring," "converting data," "analyzing," "searching," "generating vectors," "inputting values into a model," "generating a model," "training a model…collecting data, storing data, testing/refining iteratively," "training," "storing," "executing a model to produce a lending parameter," "outputting a result," "recording a result," "training a model," "retrieving data," "executing a model to analyze data," "determining a recommendation," "establishing a chat channel for communication," "communicating via chat," and "controlling assets in response to lending result analysis." As a result, none of the hardware recited by the method claim offers a meaningful limitation beyond generally linking the use of the method to a particular technological environment, that is, implementation via computers. As evidence with respect to the conventional server claimed, the specification discloses: [0024]… "Thus, for example, any sequence(s) and/or temporal order of steps of various processes or methods that are described herein are illustrative and not restrictive.
Accordingly, it should be understood that, although steps of various processes or methods may be shown and described as being in a sequence or temporal order, the steps of any such processes or methods are not limited to being carried out in any particular sequence or order, absent an indication otherwise. Indeed, the steps in such processes or methods generally may be carried out in various different sequences and orders while still falling within the scope of the present invention. [0045] Referring to FIG. 1A, the example network 100 includes the lending server (LS) node 102 connected to a cloud server node(s) 105 over a network. The LS node 102 is configured to host an AI/ML module 107. The LS node 102 may receive borrower data from a borrower 111. The LS node 102 may receive a call data related to communication between the borrower 111 and responding entity that may be implemented as chat bot (not shown). (See also para. 0052, paras. 0088-0099.) The specification makes clear that the "generate…predictive model" and "train…model" limitations are generic, as the specification and claims lack technical disclosure, and are merely applied in the claim as a generic computer component to perform the lending prediction analysis. The specification discloses the "training" and "generating" of the model at a high level lacking technical disclosure; instead, the specification focuses on the data acted upon and the application of the model to analyze financial data for a financial result.
[0010]… "generating at least one feature vector based on the plurality of features and the local historical borrowers'-related data; and providing the at least one feature vector to the ML module configured to generate a predictive model for producing at least one lending parameter for generation of the borrower-related lending verdict for the at least one lender entity node" [0011]… providing the at least one feature vector to the ML module configured to generate a predictive model for producing at least one lending parameter for generation of the borrower-related lending verdict for the at least one lender entity node. [0050]… The LS node 102 may ingest the feature vector data into an AI/ML module 107. The AI/ML module 107 may generate a predictive model(s) 108 based on the feature vector data to predict lending parameters for automatically generating a lending verdict and/or lending recommendations to be provided to the lender entities 113 (e.g., loan officers, underwriters, other practitioners, etc.). The lending parameters and/or loan risk assessment parameters may be further analyzed by the LS node 102 prior to generation of the loan verdict…. [0057]… The LS node 102 may ingest the feature vector data into an AI/ML module 107. The AI/ML module 107 may generate a predictive model(s) 108 based on the feature vector data to predict lending parameters for automatically generating a lending verdict and/or lending recommendations to be provided to the lender entities 113 (e.g., loan officers, underwriters, other practitioners, etc.)…. [0060] The AI/ML module 107 may generate a predictive model(s) 108 to predict the lending verdict and/or lending recommendation parameters for the borrower 111 in response to the specific relevant pre-stored borrowers'-related data acquired from the blockchain 110 ledger 109.
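The level of generality in these passages can be seen from the fact that the entire recited data flow (features to feature vector, feature vector to predictive model, model output to lending parameter and verdict) is satisfied by generic scoring logic of the following kind. This is an illustrative sketch only; the borrower fields, weights, and threshold are hypothetical and do not reflect any implementation the application discloses.

```python
# Illustrative sketch only: the claimed pipeline at the specification's level
# of generality. Fields, weights, and the threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Borrower:
    income: float
    debt: float
    years_in_business: float

def feature_vector(b: Borrower, historical_avg_income: float) -> list[float]:
    # "generating at least one feature vector based on the plurality of
    # features and the local historical borrowers'-related data" ([0010])
    return [b.income / historical_avg_income,
            b.debt / max(b.income, 1.0),
            b.years_in_business]

def lending_parameter(vec: list[float], weights: list[float]) -> float:
    # a generic linear score stands in for the unspecified "predictive model"
    return sum(w * x for w, x in zip(weights, vec))

def lending_verdict(score: float, threshold: float = 1.0) -> str:
    # "generation of the borrower-related lending verdict"
    return "approve" if score >= threshold else "refer"
```

Any such sketch satisfies the claim language, which is precisely why the limitations are treated as reciting a generic computer component rather than a specific technical improvement.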
[0064] The AI/ML module 107 may generate a predictive model(s) 108 based on the received borrower-related data 202 and the borrowers'-related data provided by the LS node 102. As discussed above, the AI/ML module 107 may provide predictive outputs data in the form of lending parameters for automatic generation of lending verdict and/or lending recommendations for the lender entities 113 (see FIG. 1B). [0069] The processor 204 may fetch, decode, and execute the machine-readable instructions 222 to provide the at least one feature vector to the ML module 107 configured to generate a predictive model 108 for producing at least one lending parameter for generation of the borrower-related lending verdict for the at least one lender entity node… After the loan is closed, the related documents may be converted into unique secure NFT assets to be recorded on the blockchain to be used for lending model training. [0073]… the processor 204 may provide the at least one feature vector to the ML module configured to generate a predictive model for producing at least one lending parameter for generation of the borrower-related lending verdict for the at least one lender entity node. [0079] In one disclosed embodiment, the lending parameters' model may be generated by the AI/ML module 107 that may use training data sets to improve accuracy of the prediction of the lending parameters for the lender entities 113 (FIG. 1A)… [0082]… The blockchain 110 can be used to significantly improve both a training process 402 of the machine learning model and the lending parameters' predictive process 405 based on a trained machine learning model… [0083] This can significantly reduce the collection time needed by the host platform 420 when performing predictive model training. For example, using smart contracts, data can be directly and reliably transferred straight from its place of origin (e.g., from the LS node 102 or from borrowers' databases 103 and 106 depicted in FIGs.
1A-1B) to the blockchain 110. By using the blockchain 110 to ensure the security and ownership of the collected data, smart contracts may directly send the data from the assets to the entities that use the data for building a machine learning model… [0084] Furthermore, training of the machine learning model on the collected data may take rounds of refinement and testing by the host platform 420. Each round may be based on additional data or data that was not previously considered to help expand the knowledge of the machine learning model. In 402, the different training and testing steps (and the data associated therewith) may be stored on the blockchain 110 by the host platform 420. Each refinement of the machine learning model (e.g., changes in variables, weights, etc.) may be stored on the blockchain 110. This, advantageously, provides verifiable proof of how the model was trained and what data was used to train the model. Furthermore, when the host platform 420 has achieved a finally trained model, the resulting model itself may be stored on the blockchain 110. [0085] After the model has been trained, it may be deployed to a live environment where it can make recommendation-related predictions/decisions based on the execution of the final trained machine learning model using the prediction parameters. In this example, data fed back from the asset 430 may be input into the machine learning model and may be used to make event predictions such as most optimal loan approval and loan scheduling parameters for the borrower based on the recorded borrower's data. Determinations made by the execution of the machine learning model (e.g., lending verdict and lending recommendations, loan risk assessment data, etc.) at the host platform 420 may be stored on the blockchain 110 to provide auditable/verifiable proof.
As one non-limiting example, the machine learning model may predict a future change of a part of the asset 430 (the lending recommendation parameters, i.e., assessment of risk of unsuccessful loan approval). The data behind this decision may be stored by the host platform 420 on the blockchain 110. The specification discloses the chat communication only as applied to a financial process for communicating with users, lacking technical details, and provides a laundry list of known communication protocols that can be applied for communication with a user in a financial process. [0034] This process includes transparent lending recommendations/verdict mechanism that may be coupled with a secure communications chat channel (implemented over a blockchain network) which supports both parties to set and agree on the loan processing and terms with each other. In one embodiment, the chat channel may be implemented using a chat Bot. [0038] In one embodiment, borrower calls may be recorded, transcribed and processed by an AI-based chat bot configured to answer questions and also give feedback and relay the feedback from the lending server to the borrowers in an automated fashion. The responses may be based on other borrowers in similar situations across similar industries with similar requests and similar loan types. [0045] Referring to FIG. 1A, the example network 100 includes the lending server (LS) node 102 connected to a cloud server node(s) 105 over a network. The LS node 102 is configured to host an AI/ML module 107. The LS node 102 may receive borrower data from a borrower 111. The LS node 102 may receive a call data related to communication between the borrower 111 and responding entity that may be implemented as chat bot (not shown). [0046] The call data may have language indicator metadata representing the language of the borrower used during the call.
The call data may refer to any communications such as borrower communications with the lending entities (i.e., loan officers, underwriters, agents, other practitioners, etc.) directly or via a chatbot application. In one embodiment, the call data may be processed by the LS node 102 using the pre-trained large language models. The LS node 102 may derive the language indicator and parse out the call data based on the language indicator metadata. In other words, the key features of the call data may be, advantageously, derived from the call data based on the language of the call or email or other communication. [0052] Referring to FIG. 1B, the example network 100' includes the lending server (LS) node 102 connected to a cloud server node(s) 105 over a network. The LS node 102 is configured to host an AI/ML module 107. The LS node 102 may receive borrower data from a borrower 111. The LS node 102 may receive a call data related to communication between the borrower 111 and responding entity that may be implemented as a chat bot (not shown). [0053] The call data may have language indicator metadata representing the language of the borrower used during the call. In one embodiment, the call data may be processed by the LS node 102 using the pre-trained large language models. The LS node 102 may derive the language indicator and parse out the call data based on the language indicator metadata. In other words, the key features of the call data may be, advantageously, derived from the call data based on the language of the call. [0062] Referring to FIG. 2, the example network 200 includes the LS node 102 connected to the borrower entity 101 (FIGs. 1A-B) to receive borrower data 201. The LS node 102 may be connected to a chat bot (not shown) to receive call data. [0076] With reference to FIG. 
3B, at block 314, the processor 204 may receive borrower call data from a chat bot associated with the at least one lender entity node, the call data comprising data generated during borrower's communication with the chat bot; derive a language metadata from the call data; and parse the call data based on the language metadata to derive a plurality of key features. At block 316, the processor 204 may retrieve remote historical borrowers'-related data from at least one remote borrowers' database based on the local historical borrowers'-related data, wherein the remote historical borrowers'-related data is collected at locations associated with a plurality of lender entities affiliated with financial institutions. Note that the call data may be audio data and/or textual data including emails, messages, voice-to-text converted communications, etc. [00143] Two nodes can be networked together, when one computing device 500 is able to exchange information with the other computing device 500, whether or not they have a direct connection with each other. The communication sub-module 562 supports a plurality of applications and services, such as, but not limited to World Wide Web (WWW), digital video and audio, shared use of application and storage computing devices 500, printers/scanners/fax machines, email/online chat/instant messaging, remote control, distributed computing, etc. The network may comprise a plurality of transmission mediums, such as, but not limited to conductive wire, fiber optics, and wireless. The network may comprise a plurality of communications protocols to organize network traffic, wherein application-specific communications protocols are layered (carried, as known to a person having ordinary skill in the art, as payload) over other more general communications protocols.
The plurality of communications protocols may comprise, but not limited to, IEEE 802, Ethernet, Wireless LAN (WLAN/Wi-Fi), Internet Protocol (IP) suite (e.g., TCP/IP, UDP, Internet Protocol version 5 [IPv5], and Internet Protocol version 6 [IPv6]), Synchronous Optical Networking (SONET)/Synchronous Digital Hierarchy (SDH), Asynchronous Transfer Mode (ATM), and cellular standards (e.g., Global System for Mobile Communications [GSM], General Packet Radio Service [GPRS], Code-Division Multiple Access [CDMA], and Integrated Digital Enhanced Network [IDEN]). US Pub. No. 2022/0291666 A1 by Cella et al., para. 01144, wherein the prior art teaches applying conventional blockchain technology for smart contracts, data sharing, and managing access control. US Pub. No. 2022/0237646 A1 by Michel et al., paras. 0022 and 0055, wherein the prior art teaches, in background context, attribute vectors applied to data. US Pub. No. 2022/0061236 A1 by Guan et al., para. 0319, wherein the prior art teaches using blockchain as a commonly used storage device. US Pub. No. 2019/0205773 A1 by Ackerman et al., para. 0549, wherein the prior art teaches that applying popular mainstream distributed ledger architectures is known in the art. The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity: i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v.
Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); ii. Performing repetitive calculations, Flook, 437 U.S. at 594, 198 USPQ2d at 199 (recomputing or readjusting alarm limit values); Bancorp Services v. Sun Life, 687 F.3d 1266, 1278, 103 USPQ2d 1425, 1433 (Fed. Cir. 2012) ("The computer required by some of Bancorp's claims is employed only for its most basic function, the performance of repetitive calculations, and as such does not impose meaningful limits on the scope of those claims."); iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93. With respect to blockchain technology with smart contracts, as evidence the examiner provides the NPL article "Blockchain Development History Overview" from blockchainhistory.com: Smart Contract Era (2013-2017): If Bitcoin proved the viability of decentralized currency, then Ethereum opened the era of "programmable blockchain," elevating blockchain from a simple "world ledger" to a "world computer."
Important Milestones: End of 2013: Vitalik Buterin published the Ethereum whitepaper. 2014: Ethereum crowdfunding, raising $18 million. July 30, 2015: Ethereum mainnet launched. Smart Contracts: self-executing code contracts. DApps: rise of decentralized applications. The article "Blockchain History Timeline: From Inception to Future" by Klever discloses expanding blockchain technology with the writing of smart contracts in 2015, and discloses that the Klever blockchain introduces smart contracts in decentralized apps. With respect to the LLM model chatbot, evidence that such application of this technology is well understood is provided in the article "History and Evolution of LLMs" by GeeksforGeeks: "1. The First Steps in NLP (1960s - 1990s): The journey of Large Language Models (LLMs) began in 1966 with ELIZA, a simple chatbot that mimicked conversation using predefined rules but lacked true understanding. By the 1980s, AI transitioned from manual rules to statistical models, improving text analysis. In the 1990s, Recurrent Neural Networks (RNNs) introduced the ability to process sequential data, laying the foundation for modern NLP. 2. The Rise of Neural Networks and Machine Learning (1997 - 2010): A breakthrough came in 1997 with Long Short-Term Memory (LSTM), which solved RNNs' memory limitations, making AI better at understanding long sentences. By 2010, tools like Stanford's CoreNLP helped researchers process text more efficiently. 3. The AI Revolution and the Birth of Modern LLMs (2011 - 2017): The AI revolution gained momentum in 2011 with Google Brain, which leveraged big data and deep learning for advanced language processing. In 2013, Word2Vec improved AI's ability to understand word relationships through numerical representations. Then in 2017, Google introduced Transformers in "Attention is All You Need," revolutionizing LLMs by making them faster, smarter, and more powerful." The steps of method claim 21 correspond to the functions of system claim 1.
Therefore, claim 21 has been analyzed and rejected as failing to provide additional elements that amount to an inventive concept, i.e., significantly more than the recited judicial exception. Furthermore, as previously discussed with respect to claim 1, the limitations, when considered individually, as a combination of parts, or as a whole, fail to provide any indication that the elements recited are unconventional or otherwise more than what is well-understood, conventional, routine activity in the field. The remaining dependent claims, which impose additional limitations, also fail to claim patent-eligible subject matter because the limitations cannot be considered statutory. In reference to claims 22-29, these dependent claims have also been reviewed under the same analysis as independent claim 21. Dependent claim 22 is directed toward receiving data, deriving language metadata, and parsing data to derive key features, i.e., data collection, manipulation, and analysis; applying technology to analyze data is directed toward a business practice. Dependent claim 23 is directed toward retrieving data from a database; applying technology to retrieve data is directed toward a business practice. Dependent claim 24 is directed toward generating a feature vector based on features and historical data, i.e., data organization; applying technology to organize data is directed toward a business application using technology. Dependent claim 25 is directed toward generating profile data based on borrower data and key features, which is directed toward a business practice. Dependent claim 26 is directed toward generating an updated feature vector based on a condition and generating a lending verdict, i.e., data manipulation directed toward a business practice. Dependent claim 27 is directed toward, when a threshold is exceeded, generating an updated feature vector and generating a lending verdict, i.e., applying analysis parameters to updated feature vectors and generating a lending verdict.
Dependent claim 28 is directed toward recording data, a common practice in business data management. Dependent claim 29 is directed toward retrieving lending parameters responsive to consensus among the lending server node and the lender entity node. The dependent claims have been examined individually and in combination with the preceding claims; however, they do not cure the deficiencies of claim 21. Where all claims are directed to the same abstract idea, "addressing each claim of the asserted patents [is] unnecessary." Content Extraction & Transmission LLC v. Wells Fargo Bank, Nat'l Ass'n, 776 F.3d 1343, 1348 (Fed. Cir. 2014). If applicant believes dependent claims 22-29 are directed toward patent-eligible subject matter, applicant is invited to point out the specific limitations in the claims that are directed toward patent-eligible subject matter. In reference to Claim 30: STEP 1. Per Step 1 of the two-step analysis, the claims are determined to include a non-transitory computer-readable medium comprising instructions, as in independent Claim 30. Such mediums fall under the statutory category of "manufacture." Therefore, the claims are directed to a statutory eligibility category. STEP 2A Prong 1. The instructions of medium claim 30 correspond to system claim 1. Therefore, claim 30 has been analyzed and rejected as being directed toward an abstract idea within the categories of concepts directed toward methods of organizing human activity previously discussed with respect to claim 1. STEP 2A Prong 2: The instructions of medium claim 30 correspond to system claim 1. Therefore, claim 30 has been analyzed and rejected as failing to provide limitations that are indicative of integration into a practical application, as previously discussed with respect to claim 1.
STEP 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above, the additional elements do not integrate the abstract idea into a practical application. The additional elements beyond the abstract idea include a non-transitory computer-readable medium comprising instructions that, when read, cause a processor to perform the operations corresponding to claim 1; this medium is purely functional and generic. Nearly every computer-implemented process will include a non-transitory computer-readable medium comprising instructions for a processor to perform. As a result, none of the hardware recited by the medium claims offers a meaningful limitation beyond generally linking the use of the method to a particular technological environment, that is, implementation via computers. Evidence with respect to what is conventional and routine includes: Para. 0086 discloses the readable medium: the computer-readable instructions may reside in random access memory ("RAM"), flash memory, read-only memory ("ROM"), erasable programmable read-only memory ("EPROM"), electrically erasable programmable read-only memory ("EEPROM"), registers, hard disk, a removable disk, a compact disk read-only memory ("CD-ROM"), or any other form of storage medium known in the art. [0087] An exemplary storage medium may be coupled to the processor such that the processor may read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application specific integrated circuit ("ASIC"). In an alternative embodiment, the processor and the storage medium may reside as discrete components. For example, FIG. 5 illustrates an example computing device (e.g., a server node) 500, which may represent or be integrated in any of the above-described components, etc.
The specification makes clear that the "generate…predictive model" and "train…model" limitations are generic, as the specification and claims lack technical disclosure, and are merely applied in the claim as a generic computer component to perform the lending prediction analysis. The specification discloses the "training" and "generating" of the model at a high level lacking technical disclosure; instead, the specification focuses on the data acted upon and the application of the model to analyze financial data for a financial result. [0010]… "generating at least one feature vector based on the plurality of features and the local historical borrowers'-related data; and providing the at least one feature vector to the ML module configured to generate a predictive model for producing at least one lending parameter for generation of the borrower-related lending verdict for the at least one lender entity node" [0011]… providing the at least one feature vector to the ML module configured to generate a predictive model for producing at least one lending parameter for generation of the borrower-related lending verdict for the at least one lender entity node. [0050]… The LS node 102 may ingest the feature vector data into an AI/ML module 107. The AI/ML module 107 may generate a predictive model(s) 108 based on the feature vector data to predict lending parameters for automatically generating a lending verdict and/or lending recommendations to be provided to the lender entities 113 (e.g., loan officers, underwriters, other practitioners, etc.). The lending parameters and/or loan risk assessment parameters may be further analyzed by the LS node 102 prior to generation of the loan verdict…. [0057]… The LS node 102 may ingest the feature vector data into an AI/ML module 107.
The AI/ML module 107 may generate a predictive model(s) 108 based on the feature vector data to predict lending parameters for automatically generating a lending verdict and/or lending recommendations to be provided to the lender entities 113 (e.g., loan officers, underwriters, other practitioners, etc.)…. [0060] The AI/ML module 107 may generate a predictive model(s) 108 to predict the lending verdict and/or lending recommendation parameters for the borrower 111 in response to the specific relevant pre-stored borrowers'-related data acquired from the blockchain 110 ledger 109. [0064] The AI/ML module 107 may generate a predictive model(s) 108 based on the received borrower-related data 202 and the borrowers'-related data provided by the LS node 102. As discussed above, the AI/ML module 107 may provide predictive outputs data in the form of lending parameters for automatic generation of lending verdict and/or lending recommendations for the lender entities 113 (see FIG. 1B). [0069] The processor 204 may fetch, decode, and execute the machine-readable instructions 222 to provide the at least one feature vector to the ML module 107 configured to generate a predictive model 108 for producing at least one lending parameter for generation of the borrower-related lending verdict for the at least one lender entity node… After the loan is closed, the related documents may be converted into unique secure NFT assets to be recorded on the blockchain to be used for lending model training. [0073]… the processor 204 may provide the at least one feature vector to the ML module configured to generate a predictive model for producing at least one lending parameter for generation of the borrower-related lending verdict for the at least one lender entity node.
[0079] In one disclosed embodiment, the lending parameters' model may be generated by the AI/ML module 107 that may use training data sets to improve accuracy of the prediction of the lending parameters for the lender entities 113 (FIG. 1A)… [0082]… The blockchain 110 can be used to significantly improve both a training process 402 of the machine learning model and the lending parameters' predictive process 405 based on a trained machine learning model… [0083] This can significantly reduce the collection time needed by the host platform 420 when performing predictive model training. For example, using smart contracts, data can be directly and reliably transferred straight from its place of origin (e.g., from the LS node 102 or from borrowers' databases 103 and 106 depicted in FIGs. 1A-1B) to the blockchain 110. By using the blockchain 110 to ensure the security and ownership of the collected data, smart contracts may directly send the data from the assets to the entities that use the data for building a machine learning model… [0084] Furthermore, training of the machine learning model on the collected data may take rounds of refinement and testing by the host platform 420. Each round may be based on additional data or data that was not previously considered to help expand the knowledge of the machine learning model. In 402, the different training and testing steps (and the data associated therewith) may be stored on the blockchain 110 by the host platform 420. Each refinement of the machine learning model (e.g., changes in variables, weights, etc.) may be stored on the blockchain 110. This, advantageously, provides verifiable proof of how the model was trained and what data was used to train the model. Furthermore, when the host platform 420 has achieved a finally trained model, the resulting model itself may be stored on the blockchain 110.
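The training-provenance scheme described in [0084], storing each refinement round on a ledger to provide verifiable proof of how the model was trained, corresponds to conventional hash-chained record keeping of the following kind. This is an illustrative sketch under assumed data shapes; the class and field names are hypothetical, and a simple in-memory chain stands in for any particular blockchain platform.

```python
# Illustrative sketch only: hash-chained records of training rounds, the kind
# of conventional provenance mechanism [0084] describes. Names are hypothetical.
import hashlib
import json

class TrainingLedger:
    def __init__(self):
        self.blocks = []  # each block: {"payload": ..., "prev": ..., "hash": ...}

    def record_round(self, round_no: int, weights: list[float], data_id: str) -> str:
        # store one refinement round (weights + data reference) chained to the
        # previous block's hash, so the training history is tamper-evident
        payload = {"round": round_no, "weights": weights, "data_id": data_id}
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        serialized = prev + json.dumps(payload, sort_keys=True)
        digest = hashlib.sha256(serialized.encode()).hexdigest()
        self.blocks.append({"payload": payload, "prev": prev, "hash": digest})
        return digest

    def verify(self) -> bool:
        # recompute every hash; any altered round breaks the chain
        prev = "0" * 64
        for block in self.blocks:
            serialized = prev + json.dumps(block["payload"], sort_keys=True)
            expected = hashlib.sha256(serialized.encode()).hexdigest()
            if block["prev"] != prev or block["hash"] != expected:
                return False
            prev = block["hash"]
        return True
```

The point of the sketch is that appending hashed records to an immutable log is routine record keeping; the "verifiable proof" property follows from the hash chain itself, not from any improvement to the machine learning model.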
[0085] After the model has been trained, it may be deployed to a live environment where it can make recommendation-related predictions/decisions based on the execution of the final trained machine learning model using the prediction parameters. In this example, data fed back from the asset 430 may be input into the machine learning model and may be used to make event predictions such as most optimal loan approval and loan scheduling parameters for the borrower based on the recorded borrower's data. Determinations made by the execution of the machine learning model (e.g., lending verdict and lending recommendations, loan risk assessment data, etc.) at the host platform 420 may be stored on the blockchain 110 to provide auditable/verifiable proof. As one non-limiting example, the machine learning model may predict a future change of a part of the asset 430 (the lending recommendation parameters, i.e., assessment of risk of unsuccessful loan approval). The data behind this decision may be stored by the host platform 420 on the blockchain 110. The specification discloses the chat communication only as applied to a financial process for communicating with users, lacking technical details, and provides a laundry list of known communication protocols that can be applied for communication with a user in a financial process. [0034] This process includes transparent lending recommendations/verdict mechanism that may be coupled with a secure communications chat channel (implemented over a blockchain network) which supports both parties to set and agree on the loan processing and terms with each other. In one embodiment, the chat channel may be implemented using a chat Bot. [0038] In one embodiment, borrower calls may be recorded, transcribed and processed by an AI-based chat bot configured to answer questions and also give feedback and relay the feedback from the lending server to the borrowers in an automated fashion.
The responses may be based on other borrowers in similar situations across similar industries with similar requests and similar loan types.

[0045] Referring to FIG. 1A, the example network 100 includes the lending server (LS) node 102 connected to a cloud server node(s) 105 over a network. The LS node 102 is configured to host an AI/ML module 107. The LS node 102 may receive borrower data from a borrower 111. The LS node 102 may receive call data related to communication between the borrower 111 and a responding entity that may be implemented as a chat bot (not shown).

[0046] The call data may have language indicator metadata representing the language of the borrower used during the call. The call data may refer to any communications, such as borrower communications with the lending entities (i.e., loan officers, underwriters, agents, other practitioners, etc.) directly or via a chatbot application. In one embodiment, the call data may be processed by the LS node 102 using the pre-trained large language models. The LS node 102 may derive the language indicator and parse out the call data based on the language indicator metadata. In other words, the key features of the call data may be, advantageously, derived from the call data based on the language of the call or email or other communication.

[0052] Referring to FIG. 1B, the example network 100' includes the lending server (LS) node 102 connected to a cloud server node(s) 105 over a network. The LS node 102 is configured to host an AI/ML module 107. The LS node 102 may receive borrower data from a borrower 111. The LS node 102 may receive call data related to communication between the borrower 111 and a responding entity that may be implemented as a chat bot (not shown).

[0053] The call data may have language indicator metadata representing the language of the borrower used during the call. In one embodiment, the call data may be processed by the LS node 102 using the pre-trained large language models.
The LS node 102 may derive the language indicator and parse out the call data based on the language indicator metadata. In other words, the key features of the call data may be, advantageously, derived from the call data based on the language of the call.

[0062] Referring to FIG. 2, the example network 200 includes the LS node 102 connected to the borrower entity 101 (FIGs. 1A-B) to receive borrower data 201. The LS node 102 may be connected to a chat bot (not shown) to receive call data.

[0076] With reference to FIG. 3B, at block 314, the processor 204 may receive borrower call data from a chat bot associated with the at least one lender entity node, the call data comprising data generated during the borrower's communication with the chat bot; derive language metadata from the call data; and parse the call data based on the language metadata to derive a plurality of key features. At block 316, the processor 204 may retrieve remote historical borrowers'-related data from at least one remote borrowers' database based on the local historical borrowers'-related data, wherein the remote historical borrowers'-related data is collected at locations associated with a plurality of lender entities affiliated with financial institutions. Note that the call data may be audio data and/or textual data, including emails, messages, voice-to-text converted communications, etc.

[00143] Two nodes can be networked together when one computing device 500 is able to exchange information with the other computing device 500, whether or not they have a direct connection with each other. The communication sub-module 562 supports a plurality of applications and services, such as, but not limited to, the World Wide Web (WWW), digital video and audio, shared use of application and storage computing devices 500, printers/scanners/fax machines, email/online chat/instant messaging, remote control, distributed computing, etc.
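The block 314 flow quoted from [0076], receive call data, derive language metadata, then parse the call data in that language to obtain key features, can be sketched as below. All names, the call-data shape, and the keyword table are hypothetical illustrations; the specification does not disclose this implementation (it refers instead to pre-trained large language models).

```python
# Hypothetical sketch of the call-data flow in [0076]: read the language
# indicator from the call metadata, then parse the transcript in that
# language to pull out key features. The term lists below are illustrative
# assumptions, not disclosed lending vocabulary.
KEY_TERMS = {
    "en": ["loan", "interest", "term", "collateral"],
    "es": ["préstamo", "interés", "plazo", "garantía"],
}


def derive_language(call_data: dict) -> str:
    """Derive the language indicator from the call's metadata
    (defaulting to English when no indicator is present)."""
    return call_data.get("metadata", {}).get("language", "en")


def parse_key_features(call_data: dict) -> list[str]:
    """Parse the transcript using the language-specific term list,
    returning the key features found in the communication."""
    lang = derive_language(call_data)
    text = call_data.get("transcript", "").lower()
    return [term for term in KEY_TERMS.get(lang, []) if term in text]
```

The point of the language indicator in this sketch is that parsing is routed by metadata rather than by re-detecting the language from the text itself, which matches the claim's "parse the call data based on the language metadata" phrasing.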
The network may comprise a plurality of transmission mediums, such as, but not limited to, conductive wire, fiber optics, and wireless. The network may comprise a plurality of communications protocols to organize network traffic, wherein application-specific communications protocols are layered (which may be known to a person having ordinary skill in the art as being carried as payload) over other, more general communications protocols. The plurality of communications protocols may comprise, but is not limited to, IEEE 802, Ethernet, Wireless LAN (WLAN/Wi-Fi), the Internet Protocol (IP) suite (e.g., TCP/IP, UDP, Internet Protocol version 5 [IPv5], and Internet Protocol version 6 [IPv6]), Synchronous Optical Networking (SONET)/Synchronous Digital Hierarchy (SDH), Asynchronous Transfer Mode (ATM), and cellular standards (e.g., Global System for Mobile Communications [GSM], General Packet Radio Service [GPRS], Code-Division Multiple Access [CDMA], and Integrated Digital Enhanced Network [iDEN]).

US Pub. No. 2022/0291666 A1 by Cella et al., para. 01144, wherein the prior art teaches applying conventional blockchain technology for smart contracts, data sharing, and access-control management.
US Pub. No. 2022/0237646 A1 by Michel et al., paras. 0022 and 0055, wherein the prior art teaches, as background context, attribute vectors applied to data.
US Pub. No. 2022/0061236 A1 by Guan et al., para. 0319, wherein the prior art teaches using a commonly used blockchain as a storage device.
US Pub. No. 2019/0205773 A1 by Ackerman et al., para. 0549, wherein the prior art teaches that applying popular mainstream distributed-ledger architectures is known in the art.

The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity:
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network);

ii. Performing repetitive calculations, Flook, 437 U.S. at 594, 198 USPQ at 199 (recomputing or readjusting alarm limit values); Bancorp Services v. Sun Life, 687 F.3d 1266, 1278, 103 USPQ2d 1425, 1433 (Fed. Cir. 2012) ("The computer required by some of Bancorp's claims is employed only for its most basic function, the performance of repetitive calculations, and as such does not impose meaningful limits on the scope of those claims.");

iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.

With respect to blockchain technology with smart contracts, as evidence the examiner provides the NPL article "Blockchain Development History Overview" by blockchainhistory.com:

Smart Contract Era (2013-2017): If Bitcoin proved the viability of decentralized currency, then Ethereum opened the era of "programmable blockchain," elevating blockchain from a simple "world ledger" to a "world computer."
Important Milestones:
End of 2013: Vitalik Buterin published the Ethereum whitepaper
2014: Ethereum crowdfunding, raising $18 million
July 30, 2015: Ethereum mainnet launched
Smart Contracts: Self-executing code contracts
DApps: Rise of decentralized applications

"Blockchain History Timeline: From Inception to Future" by Klever discloses expanding blockchain technology with the writing of smart contracts in 2015, and discloses that the Klever blockchain introduces smart contracts in decentralized apps.

With respect to the LLM-model chatbot, evidence that such application of this technology is well understood is provided in the article "History and Evolution of LLMs" by GeeksforGeeks:

"1. The First Steps in NLP (1960s - 1990s): The journey of Large Language Models (LLMs) began in 1966 with ELIZA, a simple chatbot that mimicked conversation using predefined rules but lacked true understanding. By the 1980s, AI transitioned from manual rules to statistical models, improving text analysis. In the 1990s, Recurrent Neural Networks (RNNs) introduced the ability to process sequential data, laying the foundation for modern NLP.

2. The Rise of Neural Networks and Machine Learning (1997 - 2010): A breakthrough came in 1997 with Long Short-Term Memory (LSTM), which solved RNNs' memory limitations, making AI better at understanding long sentences. By 2010, tools like Stanford's CoreNLP helped researchers process text more efficiently.

3. The AI Revolution and the Birth of Modern LLMs (2011 - 2017): The AI revolution gained momentum in 2011 with Google Brain, which leveraged big data and deep learning for advanced language processing. In 2013, Word2Vec improved AI's ability to understand word relationships through numerical representations. Then in 2017, Google introduced Transformers in "Attention is All You Need," revolutionizing LLMs by making them faster, smarter, and more powerful."

The instructions of medium claim 30 correspond to the functions of system claim 1.
Therefore, claim 30 has been analyzed and rejected as failing to provide additional elements that amount to an inventive concept, i.e., significantly more than the recited judicial exception. Furthermore, as previously discussed with respect to claim 1, the limitations, when considered individually, as a combination of parts, or as a whole, fail to provide any indication that the elements recited are unconventional or otherwise more than what is well-understood, conventional, routine activity in the field.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: "Steps of a Machine Learning Process" by Suslio (2021); "An Overview of the End to End Machine Learning Workflow" by MLOps (2020); "How to Prepare Data For Machine Learning" by Brownlee (2020).

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARY M GREGG, whose telephone number is (571) 270-5050. The examiner can normally be reached M-F, 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christine Behncke, can be reached at 571-272-8103. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARY M GREGG/
Examiner, Art Unit 3695

/CHRISTINE M TRAN/
Supervisory Patent Examiner, Art Unit 3695

Prosecution Timeline

Nov 13, 2023
Application Filed
Jan 11, 2024
Non-Final Rejection — §101
Apr 10, 2024
Interview Requested
Apr 10, 2024
Response Filed
Apr 16, 2024
Applicant Interview (Telephonic)
Apr 22, 2024
Examiner Interview Summary
Apr 26, 2024
Final Rejection — §101
Jul 01, 2024
Response after Non-Final Action
Jul 30, 2024
Applicant Interview (Telephonic)
Aug 04, 2024
Response after Non-Final Action
Aug 28, 2024
Request for Continued Examination
Aug 30, 2024
Response after Non-Final Action
Sep 28, 2024
Non-Final Rejection — §101
Jan 28, 2025
Response Filed
Mar 31, 2025
Final Rejection — §101
Apr 09, 2025
Interview Requested
Apr 16, 2025
Interview Requested
Apr 23, 2025
Applicant Interview (Telephonic)
May 05, 2025
Examiner Interview Summary
Jun 30, 2025
Request for Continued Examination
Jul 01, 2025
Response after Non-Final Action
Oct 11, 2025
Non-Final Rejection — §101
Jan 12, 2026
Response Filed
Mar 31, 2026
Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12450653
FIRM TRADE PROCESSING SYSTEM AND METHOD
2y 5m to grant Granted Oct 21, 2025
Patent 12443991
MINIMIZATION OF THE CONSUMPTION OF DATA PROCESSING RESOURCES IN AN ELECTRONIC TRANSACTION PROCESSING SYSTEM VIA SELECTIVE PREMATURE SETTLEMENT OF PRODUCTS TRANSACTED THEREBY BASED ON A SERIES OF RELATED PRODUCTS
2y 5m to grant Granted Oct 14, 2025
Patent 12217312
System and Method for Indicating Whether a Vehicle Crash Has Occurred
2y 5m to grant Granted Feb 04, 2025
Patent 11900469
Point-of-Service Tool for Entering Claim Information
2y 5m to grant Granted Feb 13, 2024
Patent 11861715
System and Method for Indicating Whether a Vehicle Crash Has Occurred
2y 5m to grant Granted Jan 02, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

7-8
Expected OA Rounds
14%
Grant Probability
28%
With Interview (+14.3%)
5y 3m
Median Time to Grant
High
PTA Risk
Based on 629 resolved cases by this examiner. Grant probability derived from career allow rate.
