DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/30/2025 has been entered.
Status of Claims
Claims 1 and 9 are independent claims, and are amended.
Claims 5 and 13 were previously canceled.
Claims 1-4, 6-12, and 14-16 are pending in this application and have been presented for examination on the merits.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-12, and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over DASARI, US Pub. No. 2023/0055940 A1 (hereinafter as “Dasari”), in view of Jha, US Patent No. 10,789,195 B1 (hereinafter as “Jha”) and SHAH et al., US Pub. No. 2018/0150528 A1 (hereinafter as “Shah”), and further in view of Xing et al., US Pub. No. 2022/0414113 A1 (hereinafter as “Xing”), Jooste, US Pub. No. 2013/0050253 A1 (hereinafter as “Jooste”) and Nakagawa, US Patent No. 5,854,628 (hereinafter as “Nakagawa”).
Regarding claim 1, Dasari teaches a method, comprising:
initiating a stream producer that sends formatted data to a topic (par. [0024] teaches Kafka-SQL (SQL based) and Kafka-Streams from the producer(s)/client, see Fig. 1, elements 112; and Fig. 2, elements 205 and 210; and pars. [0012] “The data ingestion layer may be configured to receive data from one or more of the plurality of data producers, wherein the data may be in any format”; [0024], e.g., “Kafka-Streams (procedure based…”, which is interpreted as the initiated stream producer; and [0037], e.g., “a Kafka-SQL (KSQL)” and “the processed topic… the data is transformed (e.g., storing in messaging/staging layer 130 under a different table or topic…”);
ingesting, by a sink connector, the formatted data into an object storage service (Fig. 1 shows ingestion of the data in different formats by sink connector(s) at element 120 into an object storage service, through the layers, for storage in database storage at elements 160 and/or Data Archive Layer 170; and pars. [0023], e.g., “formats” and “archive stores” in the ingestion layer, [0032] “Ingestion layer 120”, and [0038] “Data connect layer 150 may a Kafka-connect (KConnect) cluster solution and may connect to any pod…”);
implementing an API gateway for secure access to inference results (Dasari teaches the ingestion application programming interface (API) in the data ingestion layer, which is interpreted as the gateway for secure access, see Abstract, Fig. 1, elements 124 and 160, and pars. [0009-13], which disclose the enriched/transformed data stored in the data store(s) as results; pars. [0024] “API layer”, [0032] “Ingestion layer 120 may provide two types of load balanced solutions: (1) Producer proxy agent (PPA) layer 122 and (2) ingestion API layer 124. PPA layer 122 allows for ingestion of all data types and all protocols (tcp, udp, ftp etc.) in secured manner”, [0037], wherein “retrieving the data” implies access to the results stored in data stores 160, [0043] “ingestion layer 120 may be provided based on security importance of the data in the zone and other factors.”, and [0054], which teaches the inference results, e.g., “return the enriched/transformed data to the data messaging layer,…”).
Dasari teaches methods and systems for universal formatted data ingestion (Abstract: “ingesting different data types…”) from the clients/producers, which may be located in any suitable zone, including enterprise server farms (ESFs), secure enterprise server farms (SESFs), and public cloud zones (Figs. 1-2; pars. [0024, 29, 37-38]).
However, Dasari does not explicitly teach: “implementing an event-driven serverless compute, wherein the event-driven serverless compute is triggered automatically when any new data is ingested to the object storage service, and wherein the event-driven serverless compute reads the JSON data, converts it to transformed data, and writes the transformed data to a distributed data store; creating an ETL job, wherein the ETL job reads the data, further transforms the data, and writes it back into the distributed data store as ETL transformed data; sending the ETL transformed data to an LLM API in batches to create inference results, wherein the batches are queued to manage the rate limits; storing the inference results in cache storage; executing a double buffering process for an API endpoint by: serving data from a first, active file associated with the API endpoint; concurrently processing an updated dataset, comprising both old and new data to create a second inactive file; and upon completion of the processing, switching the API endpoint to designate the second file as the new active file to provide seamless data updates; and utilizing the cache storage for repeated calls to the API gateway to access the inference result.”
In the same field of endeavor (i.e., data processing), Jha teaches:
implementing an event-driven serverless compute, wherein the event-driven serverless compute is triggered automatically when any new data is ingested to the object storage service (Abstract: “generate a serverless application stack, based upon the message bus producer, the set of business logic, the message bus consumer, and the set of message-handling functions”; triggering an event-driven serverless compute “automatically” when data is collected/ingested into a storage service is well-understood, routine, and conventional in the art, as known by a skilled artisan, see col. 1, lines 50-60; see further col. 5, line 62 to col. 6, line 7; col. 7, lines 29-35 teaches the event-driven serverless computing; and col. 9, lines 18-20: “the successful messages are automatically written to a message bus consumer, as specified in the serverless application stack.”).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of Jha with the teachings of Dasari with respect to the above-indicated limitations, because a skilled artisan would have been motivated to implement event-driven serverless computing to automatically trigger processing of the ingested data as the stream messages load into the storage service, moving data from producer(s) to consumer(s) efficiently (Jha: Figs. 2-3, cols. 5-7, and col. 9, lines 18-20).
Dasari teaches methods and systems for universal formatted data ingestion (Abstract: “ingesting different data types…”) from the clients/producers into database stores (Figs. 1-2; pars. [0024, 29, 37-38]). In the same field of endeavor, Jha teaches building/generating/implementing event-driven serverless computing, which is triggered automatically when the collected/ingested data in the message streams is loaded into the object storage service via a cloud network into cloud container/data center database storages (Abstract; Figs. 6-8; col. 5, line 47 to col. 6, line 7; and col. 7, lines 1-45), as well as batch size selection and queue selection (Fig. 4, elements 232 and 246; and col. 5, lines 18-30, e.g., “batch processing”).
However, Dasari and Jha do not explicitly teach that the event-driven serverless compute “reads JavaScript Object Notation (JSON) data, converts it to transformed data, and writes the transformed data to a distributed data store; creating an Extract, Transform, Load (ETL) job, wherein the ETL job reads the data, further transforms the data, and writes it back into the distributed data store as ETL transformed data; sending the ETL transformed data to a Large Language Model (LLM) Application Programming Interface (API) in batches to create inference results, wherein the batches are queued to manage the rate limits; storing the inference results in cache storage; executing a double buffering process for an API endpoint by: serving data from a first, active file associated with the API endpoint; concurrently processing an updated dataset, comprising both old and new data to create a second inactive file; and upon completion of the processing, switching the API endpoint to designate the second file as the new active file to provide seamless data updates; and utilizing the cache storage for repeated calls to the API gateway to access the inference result.”
In the same field of endeavor (i.e., data processing), Shah teaches:
wherein the event-driven serverless compute reads the JSON data, converts it to transformed data, and writes the transformed data to a distributed data store (see Fig. 1B and par. [0038], which discloses “provide a serverless architecture”; further, par. [0042] “converting a Javascript Object Notation (JSON) file into a relational table format may be identified and compared with the source and target formats for the job” teaches “reading” the JSON data in the JSON file to convert it into a table format/target format, as known by a skilled artisan; and par. [0030] discloses the distributed data store);
creating an Extract, Transform, Load (ETL) job, wherein the ETL job reads the data, further transforms the data, and writes it back into the distributed data store as ETL transformed data; (pars. [0032] “perform ETL jobs that extract, transform, and load from one or more of the various data storage service(s) 210 to another location… perform an ETL operation (e.g., a job to convert a data object from one file type into one or more other data objects of a different file type)”, [0033] “invoke the execution of an ETL job (e.g., a transformation workflow) to make data available for processing in a different location, data schema, or data format for performing various processing operations”)
sending the ETL transformed data to a Large Language Model (LLM) Application Programming Interface (API) in batches to create inference results (Fig. 4 shows the sending of the ETL transformed data; par. [0016], wherein the “data transformation workflows” with the diverse set of data formats for storing data objects are interpreted as creating the inference results, and [0021], wherein “aggregate, combine, group” operations on data values from the data schema to the target schema implement the batches, as known by a skilled artisan; pars. [0019, 23] and [0073-74] disclose the LLM API, [0031] “programmatic interfaces (e.g., APIs) or graphical user interfaces.”, and [0037] “an ETL service that generates data transformation workflows, according to some embodiments. ETL service 220 may provide access to data catalogs 360 and ETL jobs (for creation, management, and execution) via interface 310, which may be a programmatic interface (e.g., Application Programming Interface (API)), command line interface, and/or graphical user interface, in various embodiments”);
Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of Shah with the teachings of Dasari and Jha with respect to the above-indicated limitations, because a skilled artisan would have been motivated to perform the ETL job and transform data of the data objects/files in different formats for more efficient data transformation workflows (Shah: Figs. 1-4 and pars. [0019-20] and [0026-27]).
Dasari teaches methods and systems for universal formatted data ingestion (Abstract: “ingesting different data types…”) from the clients/producers into database stores (Figs. 1-2; pars. [0024, 29, 37-38]). In the same field of endeavor, Jha teaches building/generating/implementing event-driven serverless computing, which is triggered automatically when the collected/ingested data in the message streams is loaded into the object storage service via a cloud network into cloud container/data center database storages (Abstract; Figs. 6-8; col. 5, line 47 to col. 6, line 7; and col. 7, lines 1-45), as well as reading and writing data via the Kinesis selection in the batch size selection and queue selection (Fig. 4, elements 232 and 246; and col. 5, lines 18-30, e.g., “batch processing”). Shah teaches methods and systems for transformation workflows via an ETL service from one formatted data/file into different formatted data/files, including updating datasets in distributed database store(s) (Figs. 1-4; and pars. [0030-33, 42]).
However, Dasari, Jha, and Shah do not explicitly teach “wherein the batches are queued to manage the rate limits; storing the inference results in cache storage; and executing a double buffering process for an API endpoint by: serving data from a first, active file associated with the API endpoint; concurrently processing an updated dataset, comprising both old and new data to create a second inactive file; and upon completion of the processing, switching the API endpoint to designate the second file as the new active file to provide seamless data updates; and utilizing the cache storage for repeated calls to the API gateway to access the inference result.”
In the same field of endeavor (i.e., data processing), Xing teaches:
wherein the batches are queued to manage the rate limits (par. [0020] “[T]he ETL system 300 can comprises various components contributing to the processing of ETL batches which can contain very large volumes of data (e.g., millions of records). These components can include a controller system (e.g., batch controller), systems from which the controller extracts data, SaaS applications or Enterprise systems, and a processing engine that processes extracted records”; Fig. 4 via “Work Queue”; and par. [0003] “A concurrency rate limit is a rate limit defined by a maximum permitted number of requests that are allowed to be pending at any one time. An ETL processing system may be rate-limited by retry with delay, throttling, leaky bucket, fixed window, sliding log, sliding window and/or pacing”, which teaches subject matter well-understood, routine, and conventional to a skilled artisan in the field of ETL operations; and pars. [0023-27] set rate limits for the ETL processing method with the ETL batches);
storing the inference results in cache storage (Fig. 3, element 176 – cache storage; and Fig. 4 shows the data processing in the ETL engine, wherein data is extracted from the source system and loaded (stored) into the target system); and
utilizing the cache storage for repeated calls to the API gateway to access the inference result (Fig. 3, element 176 – cache storage; par. [0028] “The requeued units of data can be those that have been requeued via feedback data path 3 from the ETL processing engine to the queue as a result of a rate limit error having been generated by the target system during the load phase of the ETL method. When the pacing algorithm detects that the proportion of requeued requests in the work queue is above the threshold defined by the rate limit coefficient”, wherein the “requeued requests” are interpreted as the repeated calls; see further par. [0029] “allow requeued requests to be reprocessed and loaded without new requests being added to the queue”; and par. [0041] via the “requeued” and/or “repeated” load requests algorithm).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of Xing with the teachings of Dasari, Jha, and Shah with respect to the above-indicated limitations, because a skilled artisan would have been motivated to use the work queue to manage the batches loading into cache storage with rate limits set on the workflows (Xing: Figs. 3-6 and pars. [0005] and [0023-27], setting rate limits on the ETL process/operations).
However, Dasari, Jha, Shah, and Xing do not explicitly teach the amended limitations: “executing a double buffering process for an API endpoint by: serving data from a first, active file associated with the API endpoint; concurrently processing an updated dataset, comprising both old and new data to create a second inactive file; and upon completion of the processing, switching the API endpoint to designate the second file as the new active file to provide seamless data updates.”
In the same field of endeavor (i.e., data processing), Jooste teaches:
executing a double buffering process for an API endpoint (Fig. 1, element 121 at client 120 is interpreted as the API endpoint; par. [0036], e.g., “a double-buffering approach”, par. [0047], e.g., “application program interface (API) 512… as Web browser-based”, and par. [0050]) by: serving data from a first, active file associated with the API endpoint (par. [0036] “two images (buffers) are used, each of which typically represent an entire screen of image data. A first one of the two images is initially presented on the display” teaches the first, active file).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of Jooste with the teachings of Dasari, Jha, Shah, and Xing with respect to the above amended limitations, because a skilled artisan would have been motivated to modify the combination to use double-buffering execution/processes to change/update the dataset/elements in the buffered images with an efficient switch operation (Jooste: Figs. 1 and 6; and par. [0036]).
However, Dasari, Jha, Shah, Xing, and Jooste do not explicitly teach the amended limitations: “concurrently processing an updated dataset, comprising both old and new data to create a second inactive file; and upon completion of the processing, switching the API endpoint to designate the second file as the new active file to provide seamless data updates.”
In the same field of endeavor (i.e., data processing), Nakagawa teaches:
concurrently processing an updated dataset, comprising both old and new data to create a second inactive file and upon completion of the processing, switching the API endpoint to designate the second file as the new active file to provide seamless data updates (col. 1, line 52 to col. 2, line 2, via the completed, produced, and erased frame buffers, windows, and images, and the switching technique; col. 3, lines 30-44: “executing window processing based on multiple buffers like double buffers” and “a plurality of frame buffers for storing image information”, and lines 54-59: “rewrite control section for rewriting associated overlapping information while making window ID information of corresponding windows over a plurality of frame buffers common to one another, thereby ensuring fast switching of the corresponding windows”, wherein the rewrite is interpreted as the update, and the window(s) is/are interpreted as the file(s); col. 4, line 66 to col. 5, line 5; col. 7, lines 60-67, via “changing the matrix element”; and col. 10, lines 6-31: via changed values of the displayed windows, wherein the monitor screen is interpreted as the API endpoint (see Fig. 1, element 3, Figs. 2-4, and Fig. 9)).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of Nakagawa with the teachings of Dasari, Jha, Shah, Xing, and Jooste with respect to the above amended limitations, because a skilled artisan would have been motivated to modify the combination to process updated content values/matrix elements in the table and switch the window(s) for managed monitor screen(s) displaying multimedia images, such as document-like texts, video, audio, etc. (Nakagawa: Abstract, col. 1, and cols. 3, 6, 9-10).
Regarding claim 2, Shah and Xing, in combination, teach: “wherein the ETL is triggered whenever the transformed data is added to the distributed data store” (Shah: Fig. 4 discloses the distributed data store for the ETL transformed data, par. [0043] “add the values to multiple data fields”, and par. [0073]; and Xing: Figs. 2-3 and Fig. 4, wherein loading the extracted data from the source system into the target system corresponds to adding/storing it in the distributed data store; and par. [0017], wherein a distributed data processing system has data storage).
Regarding claim 3, Jha teaches: “wherein the object storage service further comprises a Multi-vendor data lake” (Jha: Fig. 3, element 206I; and col. 1, lines 40-41, and 50-51 disclose the “serverless computing vendor” and “vendor-provided resources”, col. 3, lines 57-60 “cloud-base service provided by a serverless computing vendor, such as a cloud-based streaming service, a cloud-based storage system, or a data lake, in some non-limiting examples…”).
Regarding claim 4, Jha teaches: “wherein the distributed data store further comprises a cloud storage container” (Jha: col. 3, lines 57-60 “cloud-base service provided by a serverless computing vendor, such as a cloud-based streaming service, a cloud-based storage system, or a data lake, in some non-limiting examples…”; and col. 5, lines 63-64 “stored in set of containers”; and col. 7, line 28: “stored in a serverless provider container”; and col. 8, lines 48-50: “implemented in one or more containers, within a serverless platform, such as Amazon Lambda or similar platform”).
Regarding claim 6, Jha teaches: “wherein the stream producer further comprises an open-source distributed event streaming platform” (Jha: Fig. 3 at element 202 “Message Bus Producer” with open-source distributed event streaming platform(s), see further in col. 4, lines 14-18: “a source function may be provided to generate the messages or for fetching messages from other sources such as Amazon S3, Amazon Kinesis, streaming data platforms, Dynamo DB, Apache Kafka, Postages DB, Cloudwatch, and so forth”, and lines 20-24, and lines 24-40 including “writing of events”).
Regarding claim 7, Dasari, Jha, Shah, and Xing, in combination, teach: “initiating additional functions if required to retrieve results from cache and provide them to the end user” (Dasari: par. [0037] “retrieving the data from Kafka”; Jha: Abstract: “a designation of a set of message-handling functions” and col. 4, line 29 “maintain a message’s unique identifier in cache memory”; and Shah: par. [0053] “A request may be made to the data store to retrieve the data schema information as well as other information about the source data object, like data format. As indicated at 720, at least a target data format for the data object may be identified, in some embodiments.”; and Xing: par. [0021], “cache memory 176”, and Fig. 3 at element 176 – CACHE storage).
Regarding claim 8, Dasari teaches: “wherein while the Agent prepares to accommodate the new data, incoming queries are directed to the initial file” (Dasari: par. [0024] “a collection layer that may support various collection agents to collect the data” teaches the agent preparing to accommodate the new data, with incoming queries clustered/ingested into the ingestion layer and directed/loaded to the database(s) (see Fig. 1), which include a flat file, see par. [0071], wherein the “flat file” is interpreted as the initial file; and par. [0029] discloses the “agents” as the “plurality of producers” who prepare new data to send to the ingestion layer for clustering/ingesting. See MPEP 2111 for claim interpretation).
Regarding claim 9, the claim is rejected for the same reasons set forth above with respect to claim 1. Furthermore, Shah teaches: “wherein the event-driven serverless compute reads the JSON data, converts it to CSV transformed data, and writes the CSV transformed data to a distributed data store” (Shah: see Fig. 1B, and par. [0038], which discloses “provide a serverless architecture”; further, pars. [0019] “Comma Separated Value (CSV)… the source data format for data object 112 may be a CSV file, indicating that the fields of data in entries (e.g., rows) of the table are separated by columns. Further examples of data formats and data schemas are discussed below with regard to FIG. 1B.”, and [0042] “converting a Javascript Object Notation (JSON) file into a relational table format may be identified and compared with the source and target formats for the job”, which teaches “reading” the JSON data in the JSON file to convert it into a table format/target format, as known by a skilled artisan; and par. [0030] discloses the distributed data store).
Claims 10-12 and 14-16 are rejected based on the analysis of claims 2-4 and 6-8 above; the claims are therefore rejected on that basis.
Response to Arguments
Referring to the claim rejections under 35 U.S.C. 103, Applicant's arguments filed on 09/30/2025 directed to the newly amended limitations in claims 1 and 9 (see Remarks, pages 7-9) have been fully considered, but are moot in view of the new grounds of rejection necessitated by Applicant's amendment to the claims. Applicant's newly amended features are taught expressly or implicitly by the prior art of record. See the rejections set forth above for details.
Prior Arts
The prior art made of record on form PTO-892 and not relied upon is considered pertinent to Applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to fully consider these references when responding to this action.
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)); Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jessica N. Le whose telephone number is (571)270-1009. The examiner can normally be reached M-F 9:30 am - 5:30 pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SHERIEF BADAWI can be reached on (571) 272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Jessica N Le/Examiner, Art Unit 2169
/SHERIEF BADAWI/Supervisory Patent Examiner, Art Unit 2169