DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-18 are currently pending in application 18/367,438.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sreenivasan (US 2022/0237700 A1).
As per independent Claims 1 and 16-18, Sreenivasan discloses an optimization application configuring apparatus (system, method, programmed apparatus), comprising: at least one memory; a component collection stored in the at least one memory; at least one processor disposed in communication with the at least one memory, the at least one processor executing processor-executable instructions from the component collection, the component collection storage structured with processor-executable instructions (See at least Figs. 122-123, Figs. 172-173; Para 0332-0339) comprising: obtain, via the at least one processor, an optimization application configuration request associated with an optimization application, in which the optimization application configuration request is structured as specifying a plurality of optimization modules to configure for the optimization application, in which an optimization module corresponds to an optimization configuration comprising a distinct combination of an optimizer [artificial intelligence modules] and a solver [Within the AI Module - Completes aggregation of the recommendation data] (See at least Fig. 
172; Para 0325; and Para 0018, “…wherein the server is in communication with a plurality of distinct artificial intelligence modules operable to analyze data regarding one or more securities and generate recommendation data regarding the one or more securities, wherein the server generates a suggested portfolio securities allocation based on a weighted aggregation of the recommendation data generated by each of the plurality of distinct artificial intelligence modules and based on the risk tolerance information and the desired returns over the one or more time periods associated with the plurality of user profiles, wherein the weighted aggregation of the recommendation data is weighted based on historical data regarding the correlation of the recommendation data of each of the plurality of distinct artificial intelligence modules with previous performance data of each of the one or more securities, and wherein the plurality of distinct artificial intelligence modules includes a sentiment analysis module configured to analyze sentiment data regarding the one or more securities.”, While "solver" often refers specifically to the mathematical algorithm (like Gurobi or SLSQP), in the context of these AI-driven systems, the term often encompasses the entire computational logic that synthesizes these disparate AI recommendations into a single optimized portfolio); generate, via the at least one processor, a first optimization configuration datastructure for a first optimization module from the plurality of optimization modules, in which the first optimization configuration datastructure is structured as specifying a first cloud function for the first optimization module, a first API path for the first optimization module, and an identifier of an application load balancer to utilize for the optimization application, in which the application load balancer is structured as triggering execution of the first cloud function in response to a request specifying the first API 
path; generate, via the at least one processor, a second optimization configuration datastructure for a second optimization module from the plurality of optimization modules, in which the second optimization configuration datastructure is structured as specifying a second cloud function for the second optimization module, a second API path for the second optimization module, and the identifier of the application load balancer to utilize for the optimization application, in which the application load balancer is structured as triggering execution of the second cloud function in response to a request specifying the second API path; and provide, via the at least one processor, the first optimization configuration datastructure and the second optimization configuration datastructure to a cloud configuration server, in which the cloud configuration server is structured as initializing the application load balancer in accordance with the provided optimization configuration data structures (See at least Fig. 172; Para 0325, “FIG. 172 illustrates one embodiment of a cloud-based system for the AI investment platform. In one embodiment, the AI investment platform is application programming interface (API) compatible with cloud infrastructures including, but not limited to, AMAZON WEB SERVICES (AWS), MICROSOFT AZURE, and/or GOOGLE CLOUD PLATFORM. 
The cloud-based system includes at least one server computer, at least one user device, at least one cloud (e.g., private cloud, public cloud), at least one container cluster (e.g., KUBERNETES), at least one application, at least one database, at least one network load balancer, at least one application load balancer, third party applications and/or data providers (e.g., POLYGON.IO, PLAID), at least one workflow manager (e.g., APACHE AIRFLOW), and/or at least one virtual container packager (e.g., DOCKER).”; Para 0337, “…In one embodiment, the cloud-based server platform hosts serverless functions for distributed computing devices 820, 830, and 840.”; and Para 0338; See also Figs. 122-123, Figs. 148-150, Fig. 173; Para 0234-0236, Para 0249-0250, Para 0254-0278, Para 0293-0294, and Para 0326).
As per Claim 2, Sreenivasan discloses the apparatus of claim 1, in which the component collection storage is further structured with processor-executable instructions, comprising: provide, via the at least one processor, a first deployment package associated with the first cloud function to the cloud configuration server; and provide, via the at least one processor, a second deployment package associated with the second cloud function to the cloud configuration server (See at least Figs. 172-173; Para 0325-0327).
As per Claim 3, Sreenivasan discloses the apparatus of claim 1, in which the first optimization configuration datastructure is structured as specifying a first cloud function dependency, and in which the second optimization configuration datastructure is structured as specifying a second cloud function dependency (See at least Figs. 172-173; Para 0325-0327).
As per Claim 4, Sreenivasan discloses the apparatus of claim 3, in which the component collection storage is further structured with processor-executable instructions, comprising: provide, via the at least one processor, a first dependency deployment package associated with the first cloud function dependency to the cloud configuration server; and provide, via the at least one processor, a second dependency deployment package associated with the second cloud function dependency to the cloud configuration server (See at least Figs. 172-173; Para 0325-0327).
As per Claim 5, Sreenivasan discloses the apparatus of claim 4, in which the first dependency deployment package and the second dependency deployment package share a common code base (See at least Figs. 172-173; Para 0325-0327).
As per Claim 6, Sreenivasan discloses the apparatus of claim 1, in which the optimization application configuration request is structured as specifying cached data repository settings for the optimization application (See at least Figs. 172-173; Para 0325-0327).
As per Claim 7, Sreenivasan discloses the apparatus of claim 6, in which the cached data repository settings are structured to specify an IP address and a port of a cached data repository, in which the cached data repository is structured as storing data retrieved from a set of source data repositories and transformed into a cached data format utilized by the optimization application (See at least Figs. 172-173; Para 0325-0327).
As per Claim 8, Sreenivasan discloses the apparatus of claim 6, in which the first optimization configuration datastructure is structured as specifying the cached data repository settings, and in which the second optimization configuration datastructure is structured as specifying the cached data repository settings (See at least Figs. 172-173; Para 0325-0327).
As per Claim 9, Sreenivasan discloses the apparatus of claim 1, in which the first optimization configuration datastructure is structured as specifying a first number of concurrent cloud function instances for the first cloud function, and in which the second optimization configuration datastructure is structured as specifying a second number of concurrent cloud function instances for the second cloud function (See at least Figs. 172-173; Para 0325-0327).
As per Claim 10, Sreenivasan discloses the apparatus of claim 9, in which the first number of concurrent cloud function instances and the second number of concurrent cloud function instances are identical (See at least Figs. 172-173; Para 0325-0327).
As per Claim 11, Sreenivasan discloses the apparatus of claim 1, in which the first optimization configuration datastructure is structured as specifying first runtime environment settings, and in which the second optimization configuration datastructure is structured as specifying second runtime environment settings (See at least Figs. 172-173; Para 0325-0327).
As per Claim 12, Sreenivasan discloses the apparatus of claim 1, in which the application load balancer is structured as triggering execution of the first cloud function in response to the request specifying the first API path on an instance of the first cloud function that depends on a requester's region (See at least Figs. 172-173; Para 0325-0327).
As per Claim 13, Sreenivasan discloses the apparatus of claim 1, in which the component collection storage is further structured with processor-executable instructions, comprising: generate, via the at least one processor, a third optimization configuration datastructure for a third optimization module from the plurality of optimization modules, in which the third optimization configuration datastructure is structured as specifying a third cloud function for the third optimization module, a third API path for the third optimization module, and the identifier of the application load balancer to utilize for the optimization application, in which the application load balancer is structured as triggering execution of the third cloud function in response to a request specifying the third API path, in which the first optimization module and the third optimization module utilize an identical optimizer; and provide, via the at least one processor, the third optimization configuration datastructure to the cloud configuration server (See at least Figs. 172-173; Para 0325-0327).
As per Claim 14, Sreenivasan discloses the apparatus of claim 13, in which the component collection storage is further structured with processor-executable instructions, comprising: generate, via the at least one processor, a fourth optimization configuration datastructure for a fourth optimization module from the plurality of optimization modules, in which the fourth optimization configuration datastructure is structured as specifying a fourth cloud function for the fourth optimization module, a fourth API path for the fourth optimization module, and the identifier of the application load balancer to utilize for the optimization application, in which the application load balancer is structured as triggering execution of the fourth cloud function in response to a request specifying the fourth API path, in which the fourth optimization module and the second optimization module utilize an identical solver; and provide, via the at least one processor, the fourth optimization configuration datastructure to the cloud configuration server (See at least Figs. 172-173; Para 0325-0327).
As per Claim 15, Sreenivasan discloses the apparatus of claim 1, in which the optimization application is a portfolio optimizer structured as utilizing a set of security identifiers as an input (See at least Figs. 122-123, Figs. 148-150, Figs. 172-173, Para 0234-0236, Para 0249-0250, Para 0254-0278, Para 0293-0294, and Para 0325-0337).
Response to Arguments
Applicant's arguments filed on 11/14/2025, with respect to Claims 1-18, have been fully considered but are not persuasive.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The Applicant’s arguments are addressed in the rejection above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN P OUELLETTE whose telephone number is (571)272-6807. The examiner can normally be reached on M-F 8am-6pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda C Jasmin, can be reached at telephone number (571) 272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for published applications is available in Patent Center; status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
December 28, 2025
/JONATHAN P OUELLETTE/Primary Examiner, Art Unit 3629