DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant filed a response dated October 16, 2025, in which claims 1, 8, and 15 were amended and claims 4-7, 11-14, and 18-20 were canceled. Accordingly, claims 1-3, 8-10, and 15-17 are currently pending in the application.
Priority
Application 18/049,433 was filed on October 25, 2022.
Examiner Request
Should Applicant amend the claims, the Applicant is requested to indicate where in the specification there is support for the amendments. The purpose of this request is to reduce potential 35 U.S.C. § 112(a) (or pre-AIA § 112, first paragraph) issues that can arise when claims are amended without support in the specification. The Examiner thanks the Applicant in advance.
Claim Rejections - 35 USC § 101
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 8-10, and 15-17 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. (MPEP 2106). The claims are directed to a method, a system, and an apparatus, each of which is one of the statutory categories of invention (Step 1: YES). The recitation of the claimed invention is analyzed as follows, in which the abstract elements are boldfaced.
Claim 1 recites the limitations of:
A method comprising: receiving, at a producer application, balance transfer request data from multiple data sources;
formatting, by the producer application, the balance transfer request data into a plurality of balance transfer data records;
streaming, by the producer application, the plurality of balance transfer data records to an event streaming platform comprising a cluster of servers hosting a plurality of partitions;
replicating, by the event streaming platform, the plurality of balance transfer data records across the plurality of partitions;
formatting, by the event streaming platform, the plurality of balance transfer data records into one or more partitions of the plurality of partitions in a corresponding topic;
writing, by the event streaming platform to a consuming application, the balance transfer data records sequentially in the order in which they are added to the one or more partitions of the plurality of partitions in the corresponding topic;
sending, by the event streaming platform, the balance transfer data records in a sequential order in which the partitioned data was received to the consuming application;
formatting, by the consuming application, the balance transfer data records as parameterized data;
calling, by the consuming application, an exposed application program interface (API) from a real-time payment gateway of a real-time payment network;
sending, by the consuming application, a first part of one of the plurality of balance transfer data records to the real-time payment network as the parameterized data of the exposed API in response to the call, wherein the real-time payment network initiates settlement of a payment indicated in the first part of the one of the plurality of balance transfer data records;
subscribing, by the consuming application, to the corresponding topic via a consumer API;
sending, by the consuming application, a second part of the one of the plurality of balance transfer data records to a posting application via an exposed producer API; and
posting, by the posting application, a hard post of the payment indicated in the first part of the one of the plurality of balance transfer data records to an account ledger without requiring a prior soft post.
The claim as a whole recites a method that, under its broadest reasonable interpretation, covers collecting, analyzing, and transmitting data for the facilitation of a transfer of resources, such as cash or money, between users. This is a fundamental economic practice of a financial transaction; a commercial interaction, such as for business relations; and managing personal behavior or relationships or interactions between people, which are certain methods of organizing human activity.
Thus, the claims recite an abstract idea. (Step 2A, prong 1: YES).
Moreover, the judicial exception is not integrated into a practical application. Other than reciting a “producer application”, “an event streaming platform comprising a cluster of servers”, a “consuming application”, “an exposed application program interface (API) from a real-time payment gateway of a real-time payment network”, “a consumer API”, and “a posting application” to perform the steps of “formatting”, “replicating”, “writing”, “subscribing”, and “posting”, nothing in the claim elements precludes the steps from practically being a certain method of organizing human activity. The claim as a whole does not integrate the exception into a practical application. The claim merely describes how to generally “apply” the concept of collecting, analyzing, and transmitting data for the facilitation of a transfer of resources, such as cash or money, between users in a computer environment. The additional computer elements recited in the claim limitations are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception utilizing generic computer components.
For example, the Specification at [0073] discloses that “[t]he system of the invention or portions of the system of the invention may be in the form of a ‘processing machine,’ such as a general-purpose computer”. Moreover, the Specification at [0054] discloses that the event streaming cluster “may be comprised of as many physical servers as is necessary or desirable to meet redundancy and throughput needs.”
Thus, the specification supports that general-purpose computers or computer components are utilized to implement the steps of the abstract idea.
Merely implementing the abstract idea on a generic computer is not a practical application of the abstract idea. The claim as a whole, viewing the additional elements both individually and in combination, does not integrate the judicial exception into a practical application. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. (Step 2A, prong two: NO)
The claim does not include additional elements, when considered both individually and as an ordered combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a “producer application”, “an event streaming platform comprising a cluster of servers”, a “consuming application”, “an exposed application program interface (API) from a real-time payment gateway of a real-time payment network”, “a consumer API”, and “a posting application” to perform the steps of “formatting”, “replicating”, “writing”, “subscribing”, and “posting” amount to no more than mere instructions to apply the exception using generic computer components. The claim merely describes how to generally “apply” the concept of collecting, analyzing, and transmitting data for the facilitation of a transfer of resources, such as cash or money, between users in a computer environment. Thus, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea. Such additional elements are determined not to contain an inventive concept according to MPEP 2106.05(f).
It should be noted that (1) the “recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not provide significantly more because this type of recitation is equivalent to the words ‘apply it’”; and (2) “[u]se of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice, commercial interaction, or managing personal behavior or relationships or interactions between people, mental process, or mathematical calculation) does not integrate a judicial exception into a practical application or provide significantly more.” (MPEP 2106.05(f)).
Claims 8 and 15 are substantially similar to claim 1, thus, they are rejected on similar grounds.
Claim 8 recites the additional elements of “A system comprising: a producer application; an event streaming platform comprising a cluster of servers hosting a plurality of partitions; a consuming application; and a posting application; wherein the system is configured to:”
Claim 15 recites the additional elements of “A non-transitory computer readable storage medium, including instructions stored thereon, which instructions when read and executed by one or more computers cause the one or more computers to perform steps comprising:”
For similar reasons as explained above with regard to claim 1, under Step 2A, prong two, these additional elements are merely applying generic computer components to implement the abstract idea. Under Step 2B, when viewing the additional elements individually and in combination, the additional elements do not amount to an inventive concept amounting to significantly more than the judicial exception itself as the claimed computer-related technologies are mere tools for implementing the abstract idea as explained with regard to claim 1.
Dependent claims 2-3, 9-10, and 16-17 merely limit the abstract idea and do not recite any further additional elements beyond the cited abstract idea and the elements addressed above, thus, they do not amount to significantly more. The dependent claims are abstract for the reasons presented above because there are no additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination. Thus, the dependent claims are directed to an abstract idea. (Step 2B: No)
Therefore, claims 1-3, 8-10, and 15-17 are not patent-eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 8-10, and 15-17 are rejected under 35 U.S.C. § 103 as being unpatentable over Zhou, U.S. Patent Application Publication Number 2021/0248613, in view of Jones, U.S. Patent Application Publication Number 2022/0198501, and further in view of Luo, U.S. Patent Application Publication Number 2019/0384835.
As per claim 1,
Zhou explicitly teaches:
A method comprising: receiving, at a producer application, balance transfer request data from multiple data sources;
(Zhou US20210248613 at paras. 40-42, 92-94 and 155-159) ("client devices 150, client systems 190, and/or online resources 140 may interact with brokers 234 with (1) a producer API, which allows publishing streams of records; (2) a consumer API, which allows to subscribe to topics and processes streams of records; (3) a connector API, executing the reusable producer and consumer APIs that can link the topics to the existing applications, and/or (4) stream API, which converts the input streams to output and produces the result." "As further described in connection with FIG. 11, to monitor event data streams service system 105 may configure stream capture applications and/or setup a Kafka cluster. Data streams may include transactions, service requests, information requests, purchase orders, among other interactions with service system 105. [0158] In step 804, service system may receive and/or identify an event from an external domain or server. The identified event may be part of the data streams monitored in step 802. The event may be for example a transaction request include electronic payment information, a merchant, product(s), and an amount." "[0041] In some implementations, the disclosed systems and methods may improve the technical field of automated electronic payment fraud detection. For example, the disclosed systems and methods may be applicable to data streams with data transactions to capture when a user performs high frequency transactions in a short amount of time. For such applications, the disclosed systems and methods may process transaction information in real-time and calculate frequency, origin, and amount of transactions quickly, and accurately. Further, for such applications the disclosed systems and methods may provide a platform for stable and scalable analysis that can be incorporated in data streams from different sources.")
formatting, by the producer application, the balance transfer request data into a plurality of balance transfer data records;
(Zhou US20210248613 at paras. 92-94, 155-159, 189-191) ("[0093] Moreover, different elements of system 100 may interact with brokers 234 using API's supported by stream operator 110. For example, client devices 150, client systems 190, and/or online resources 140 may interact with brokers 234 with (1) a producer API, which allows publishing streams of records; (2) a consumer API, which allows to subscribe to topics and processes streams of records; (3) a connector API, executing the reusable producer and consumer APIs that can link the topics to the existing applications, and/or (4) stream API, which converts the input streams to output and produces the result. In some embodiments, the consumer and producer APIs may build on other stream processing elements, such as filter/normalizer 232, and may offer a reference implementation for consumers and producers clients in Java." "[0156] Referring now to FIG. 8, there is shown a flow chart describing an alert generation process, consistent with disclosed embodiments. Process 800 may be carried out by service system 105 in real-time in response to receiving data streams from client systems 190 and/or other networked elements of system 100. For example, process 800 may be carried out by stream operator 110 and real-time state calculator 120 in real-time as events in data streams are being received. [0157] In step 802, service system 105 may be monitoring event data streams. As further described in connection with FIG. 11, to monitor event data streams service system 105 may configure stream capture applications and/or setup a Kafka cluster. Data streams may include transactions, service requests, information requests, purchase orders, among other interactions with service system 105. [0158] In step 804, service system may receive and/or identify an event from an external domain or server. The identified event may be part of the data streams monitored in step 802. The event may be for example a transaction request include electronic payment information, a merchant, product(s), and an amount." "[0190] The parameters selected for the configuration of the stream capture applications in step 1102 may include required configuration parameters such as “application.id” and “bootstrap.servers”." "Moreover, parameters for configuration of the capture applications may include producer configuration parameters (e.g., “Naming, Default Values, enable.auto.commit, rocksdb.config.setter”) or Recommended configuration parameters for resiliency (e.g., “replication.factor”).")
streaming, by the producer application, the plurality of balance transfer data records to an event streaming platform comprising a cluster of servers hosting a plurality of partitions;
(Zhou US20210248613 at paras. 92-94 and 155-159) ("client devices 150, client systems 190, and/or online resources 140 may interact with brokers 234 with (1) a producer API, which allows publishing streams of records; (2) a consumer API, which allows to subscribe to topics and processes streams of records; (3) a connector API, executing the reusable producer and consumer APIs that can link the topics to the existing applications, and/or (4) stream API, which converts the input streams to output and produces the result." "As further described in connection with FIG. 11, to monitor event data streams service system 105 may configure stream capture applications and/or setup a Kafka cluster. Data streams may include transactions, service requests, information requests, purchase orders, among other interactions with service system 105. [0158] In step 804, service system may receive and/or identify an event from an external domain or server. The identified event may be part of the data streams monitored in step 802. The event may be for example a transaction request include electronic payment information, a merchant, product(s), and an amount." "[0088] Brokers 234 may include stream-processing software. For example, in some embodiments, stream operator 110 may implement a processing platform such as Apache Kafka®. In such embodiments, brokers 234 may include one or more servers running on the processing platform. Brokers 234 may process data streams, before or after filter/normalizer 232, and publish data into topics within brokers 234. [0089] In such embodiments, brokers 234 may be configurable to extract and store key-value messages that come events in data streams from client systems 190. Brokers 234 may divide data into different “partitions” within different “topics”. Within a partition, brokers 234 may order key-value messages by their offsets (the position of a message within a partition), and indexed and stored together with a timestamp, which may be determined by a timer 238.")
formatting, by the event streaming platform, the plurality of balance transfer data records into one or more partitions of the plurality of partitions in a corresponding topic;
(Zhou US20210248613 at paras. 157-159 and 195-198) ("[0196] In step 1110, service system 105 may process events sequentially in the stream. Based on the key/value pairs and the topic configuration and partition, the process may be performed efficiently by using parallelized brokers 234 based on different topics. In step 1110, processing may be performed in parallel by using partitions of data that can be processed concurrently without a defined order. [0197] In step 1112, service system 105 may associate the key/value events with an account and their related state variables. The state variables may be updated based on the key/values and service system 105 may store a single copy of state variables, to minimize memory utilization, and configure variables to enforce the ability to access them using operators with O (1) complexity. Thus, in step 1112, after configuring capture applications and processing sequentially events, service system 105 may store low complexity state variables that monitor data streams." "As further described in connection with FIG. 11, to monitor event data streams service system 105 may configure stream capture applications and/or setup a Kafka cluster. Data streams may include transactions, service requests, information requests, purchase orders, among other interactions with service system 105. [0158] In step 804, service system may receive and/or identify an event from an external domain or server. The identified event may be part of the data streams monitored in step 802. The event may be for example a transaction request include electronic payment information, a merchant, product(s), and an amount.")
formatting, by the consuming application, the balance transfer data records as parameterized data;
(Zhou at paras. 62-64, 79-87) ("[0063] In some embodiments, client request interface 130 may include processors that perform authentication functions of client devices 150 or client systems 190. For example, client request interface 130 may identify requests based on client IDs and/or a secure token that is then compared to alert notices that are generated by, for example, real-time state calculator 120. In some embodiments, client request interface 130 may include processors configured to encode content and packet content in different formats. In some embodiments, client request interface 130 may include multiple core processors to handle concurrently multiple operations and/or streams. For example, client request interface 130 may include parallel processing units to concurrently handle requests of multiple client devices 150." "[0080] In some embodiments, filter/normalizer 232 may be configured with one or more operators that transform an input stream into an output stream. Operators in filter/normalizer 232 may process each event in data streams to modify at least one aspect and then submitting the event only if it meets the operator requirement. For example, every event in a data stream may be configured to contain information like account number, transaction date, transaction time, and transaction price. In such embodiments the event can be represented by the following 4-variable “Transaction Record” type: [0081] TransactionRecord= [0082] rstring account, [0083] rstring date, [0084] rstring time, [0085] decimal64 price; where rstring is a sequence of raw bytes that supports string processing when the character encoding is known, and decimal64 is the IEEE 754 decimal 64-bit floating point number.")
sending, by the consuming application, a first part of one of the plurality of balance transfer data records to the [real-time payment] network as the parameterized data of the exposed API in response to the call,
(Zhou at paras. 62-64, 79-87) ("[0063] In some embodiments, client request interface 130 may include processors that perform authentication functions of client devices 150 or client systems 190. For example, client request interface 130 may identify requests based on client IDs and/or a secure token that is then compared to alert notices that are generated by, for example, real-time state calculator 120. In some embodiments, client request interface 130 may include processors configured to encode content and packet content in different formats. In some embodiments, client request interface 130 may include multiple core processors to handle concurrently multiple operations and/or streams. For example, client request interface 130 may include parallel processing units to concurrently handle requests of multiple client devices 150." "[0080] In some embodiments, filter/normalizer 232 may be configured with one or more operators that transform an input stream into an output stream. Operators in filter/normalizer 232 may process each event in data streams to modify at least one aspect and then submitting the event only if it meets the operator requirement. For example, every event in a data stream may be configured to contain information like account number, transaction date, transaction time, and transaction price. In such embodiments the event can be represented by the following 4-variable “Transaction Record” type: [0081] TransactionRecord= [0082] rstring account, [0083] rstring date, [0084] rstring time, [0085] decimal64 price; where rstring is a sequence of raw bytes that supports string processing when the character encoding is known, and decimal64 is the IEEE 754 decimal 64-bit floating point number.")
subscribing, by the consuming application, to the corresponding topic via a consumer API;
(Zhou US20210248613 at paras. 92-94) ("[0093] Moreover, different elements of system 100 may interact with brokers 234 using API's supported by stream operator 110. For example, client devices 150, client systems 190, and/or online resources 140 may interact with brokers 234 with (1) a producer API, which allows publishing streams of records; (2) a consumer API, which allows to subscribe to topics and processes streams of records; (3) a connector API, executing the reusable producer and consumer APIs that can link the topics to the existing applications, and/or (4) stream API, which converts the input streams to output and produces the result. In some embodiments, the consumer and producer APIs may build on other stream processing elements, such as filter/normalizer 232, and may offer a reference implementation for consumers and producers clients in Java. In such embodiments, the underlying messaging protocol may be a binary protocol that developers can use to write their own consumer or producer clients in any programming language. Further, in such embodiments the API's may be executed and/or supported by stream processor 230. However, these API's may be hosted by other elements of service system 105 or may be hosted remotely, for example by online resources 140.")
sending, by the consuming application, a second part of the one of the plurality of balance transfer data records to a [posting] application via an exposed producer API; and
(Zhou US20210248613 at paras. 62-64, 79-87) ("[0063] In some embodiments, client request interface 130 may include processors that perform authentication functions of client devices 150 or client systems 190. For example, client request interface 130 may identify requests based on client IDs and/or a secure token that is then compared to alert notices that are generated by, for example, real-time state calculator 120. In some embodiments, client request interface 130 may include processors configured to encode content and packet content in different formats. In some embodiments, client request interface 130 may include multiple core processors to handle concurrently multiple operations and/or streams. For example, client request interface 130 may include parallel processing units to concurrently handle requests of multiple client devices 150." "[0080] In some embodiments, filter/normalizer 232 may be configured with one or more operators that transform an input stream into an output stream. Operators in filter/normalizer 232 may process each event in data streams to modify at least one aspect and then submitting the event only if it meets the operator requirement. For example, every event in a data stream may be configured to contain information like account number, transaction date, transaction time, and transaction price. In such embodiments the event can be represented by the following 4-variable “Transaction Record” type: [0081] TransactionRecord= [0082] rstring account, [0083] rstring date, [0084] rstring time, [0085] decimal64 price; where rstring is a sequence of raw bytes that supports string processing when the character encoding is known, and decimal64 is the IEEE 754 decimal 64-bit floating point number.")
Zhou does not explicitly teach, however, Jones does teach:
calling, by the consuming application, an exposed application program interface (API) from a real-time payment gateway of a real-time payment network;
(Jones US20220198501 at paras. 18-20, 61-63, 85-87) ("[0019] In other instances, and through an implementation of one or more real-time payment (RTP) technologies, the merchant computing system may generate elements of messaging data that request, from the customer, payment for the one or more purchased products or services in real-time and contemporaneously with the initiated purchase transaction, e.g., a “real-time” payment. For example, the generated elements of messaging data may populate, and be maintained within, message fields of a request-for-payment (RFP) message formatted and structured in accordance with one or more standardized data-exchange protocols, such as the ISO 20022 standard for electronic data exchange between financial institutions." "[0086] Referring to FIG. 3B, a programmatic interface associated with one or more application programs executed at client device 102, such as an application programming interface (API) 346 associated with mobile banking application 108, may receive notification data 326 and perform operations that cause client device 102 to execute mobile banking application 108 (e.g., through a generation of a programmatic command, etc.). Upon execution by the one or more processors of client device 102, executed mobile banking application 108 may receive notification data 326 from API 346, and executed mobile banking application 108 may perform operations that store notification data 326, which includes payment notification 324 and review notification 344, within a corresponding portion of a tangible, non-transitory memory, e.g., within a portion of memory 105.")
sending, [by the consuming application,] a first part of one of the plurality of balance transfer data records to the real-time payment network as the parameterized data of the exposed API in response to the call, wherein the real-time payment network initiates settlement of a payment indicated in the first part of the one of the plurality of balance transfer data records;
(Jones US20220198501 at paras. 18-20, 61-63, 85-87) ("[0018] The merchant computing system may also perform operations that authorize the initiated purchase transaction using the available payment instrument, e.g., based on portions of the generated transaction data and the obtained payment data. By way of example, the merchant computing may generate a transaction-processing message that includes portions of the data characterizing the now-authorized transaction, and may transmit the transaction-processing message to a computing system associated with a transaction processing network (e.g., a payment rail or payment network), which may perform additional operations that reconcile, settle, and clear the authorized purchase transaction in accordance with the transaction data. For instance, the merchant computing system may generate, and transmit, the transaction-processing message to the computing system associated with the transaction processing network at predetermined intervals subsequent to the initiation and authorization of the purchase transaction, such as, but not limited to, at a completion of a business day or a completion of a calendar day. Further, and upon completion of the reconciliation, clearance, and settlement processes at the predetermined intervals, funds associated with the purchase transaction may be credited to an account held by the merchant, and may be accessible to that merchant for withdrawal from the merchant's financial institution.")
sending, [by the consuming application, a second part of the one of the plurality of] balance transfer data records to a posting application via an exposed producer API; and
(Jones US20220198501 at paras. 21-23) ("[0022] Upon interception of an ISO-20022-compliant RFP message associated with a transaction involving the merchant and the customer, such as the purchase transaction described herein, the FI computing system provision a payment notification to an application program executed at the client device that, upon presentation within a corresponding digital interface, prompts the customer to provide an approval of the real-time payment requested by the merchant, e.g., contemporaneously with the initiated transaction. Certain of the exemplary RTP processes described herein may, when implemented collectively by the computing systems of the financial institution, of the merchant, and of the customer, facilitate a reconciliation, settlement, and clearance of the initiated transaction in real-time and without conventional payment rails and transaction processing networks, and may reduce instances of fraudulent activity and chargebacks characteristic of the transaction reconciliation, clearance, and settlement processes involving payment rails and transaction processing-messages, e.g., as the merchant computing system receives the customer's approval of the requested payment contemporaneously with the initiation of the transaction and prior to any provisioning of the one or more purchased products or services to the customer.")
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Zhou and Jones, because it allows for improved computer-implemented systems and processes that assess initiated data exchanges in real-time based on structured messaging data. (Jones at Abstract and paras. 2-7).
Zhou and Jones do not explicitly teach the following limitations; Luo, however, does teach:
replicating, by the event streaming platform, the plurality of balance transfer data records across the plurality of partitions;
(Luo US20190384835 at paras. 17-19) ("[0018] For each topic, a Kafka cluster maintains a partitioned log 200 as illustratively depicted in FIG. 2, where records (also referred to herein interchangeably as messages) in a partition are each assigned a sequential identifier (id), called an offset, when published that uniquely identifies each record within the partition. When a Kafka consumer consumes a Kafka message, the message is consumed in ascending order of offset to a specific partition (i.e., from small to big), not topic. There may be different types of offsets for a partition. A current position offset 205 is generated when a consumer consumes a Kafka message and the offset is increased by 1. A last committed offset 210 is the last acknowledged offset and can be characterized by two modes, automatic and manual. In the automatic mode, the offset will be automatically committed when a message is consumed. In the manual mode, the offset can be acknowledged by an API. A high watermark type offset refers to the offset where all messages below are replicated and is the highest offset a Kafka consumer can consume. FIG. 2 further includes a log end offset 220 that is at the end of the partition 200. [0019] FIG. 3 is an illustrative depiction of one example process 300 herein. Process 300 is one embodiment of an ingestion process and includes, at a high level, two steps or operations 305 and 310. An example associated with FIG. 3 in one embodiment includes a data stream comprising a Kafka data stream and a target data storage including a Hadoop cluster (e.g., HDFS+Hive ACID tables). Operation 305 includes a streaming writer. In the example of FIG. 3, the streaming writer 305 writes Kafka events (i.e., messages) to ingestion HDFS folders or other data structures organized by the topic and partition of the Kafka messages. 
Additionally, the writing of the Kafka messages is processed as a transaction, wherein the Kafka offset associated with the writing operation is acknowledged when the HDFS write is completed or finished. By acknowledging the Kafka offset only after the HDFS write is finished, process 300 can avoid having to include a data recovery mechanism or the streaming writer since either the write operation occurs (and is acknowledged) or it does not happen (i.e., no acknowledgement). In some embodiments, operation 305 may include merging the Kafka events, which are typically small files, and writing the merged files to a write-ahead log (WAL). In the example of FIG. 3, one WAL is used for each Kafka partition to allow acknowledgement. In some aspects, merging the Kafka files limits a total number of files and may thus avoid overburdening a Namenode. In some instances, the merging of the Kafka files may have the benefit of increasing throughput (e.g., about 10×).")
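For context only, the offset-acknowledgment pattern described in the quoted Luo passage (the consumer's offset is committed only after the downstream write completes, so no separate data recovery mechanism is needed) can be sketched as follows. This is an illustrative simulation in plain Python; the class, record values, and failure hook are hypothetical and appear in neither the cited reference nor the pending claims.

```python
class PartitionLog:
    """An append-only log for one partition; a record's index is its offset."""
    def __init__(self):
        self.records = []          # records, indexed by sequential offset
        self.committed = -1        # last acknowledged (committed) offset

    def append(self, record):
        self.records.append(record)
        return len(self.records) - 1   # the new record's offset

def consume_and_write(log, storage, fail_on=None):
    """Read records past the committed offset, write each to `storage`,
    and acknowledge the offset only after the write succeeds."""
    for offset in range(log.committed + 1, len(log.records)):
        record = log.records[offset]
        if fail_on == offset:          # simulate a crash mid-write
            raise IOError(f"write failed at offset {offset}")
        storage.append(record)         # the downstream write (HDFS in Luo)
        log.committed = offset         # acknowledge only after the write

log = PartitionLog()
for msg in ("bt-001", "bt-002", "bt-003"):
    log.append(msg)

storage = []
try:
    consume_and_write(log, storage, fail_on=2)   # fails before offset 2 is written
except IOError:
    pass

# Offsets 0-1 were written and committed; offset 2 was neither, so a
# restarted consumer resumes exactly at the failed record.
assert storage == ["bt-001", "bt-002"] and log.committed == 1
consume_and_write(log, storage)                  # retry succeeds
assert storage == ["bt-001", "bt-002", "bt-003"] and log.committed == 2
```

Because an unacknowledged offset implies the write never occurred, either the write happens and is acknowledged or neither happens, which is the property the quoted passage relies on.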
writing, by the event streaming platform to a consuming application, the balance transfer data records sequentially in the order in which they are added to the one or more partitions of the plurality of partitions in the corresponding topic;
(Luo US20190384835 at paras. 17-19) ("[0017] Prior to discussing the features of the ingestion process(es) herein, a number of aspects will be introduced. In some instances, the streaming data that may be received and processed by methods and systems herein may be Apache Kafka® data events, messages, or records. Kafka is run as a cluster on one or more servers that can serve a plurality of datacenters. A Kafka cluster stores streams of records produced by publishers for consumption by consumer applications. A Kafka publisher publishes a stream of records or messages to one or more Kafka topics and a Kafka consumer subscribes to one or more Kafka topics. A Kafka data stream is organized in a category or feed name, called a topic, to which the messages are stored and published. Kafka topics are divided into a number of partitions that contain messages in an ordered, unchangeable sequence. Each topic may have multiple partitions. Multiple consumers may be needed or desired to read from the same topic to, for example, keep pace with the publishing rate of producers of the topic. Consumers may be organized into consumer groups, where topics are consumed in the consumer groups and each topic can only be consumed in a specific consumer group. However, one consumer in a specific consumer group might consume multiple partitions of the same topic. That is, each consumer in a consumer group might receive messages from a different subset of the partitions in a particular topic. FIG. 1 is an illustrative depiction of some aspects of a Kafka data stream including a topic T1 at 105 that is divided into four partitions (Partition 0, Partition 1, Partition 2, and Partition 3). As shown in the example of FIG. 1, a consumer group 110 includes two consumers (i.e., Consumer 1, Consumer 2), where topic T1 is consumed by consumer group 110 and the consumers 115, 120 in consumer group 110 each subscribes to and consumes multiple partitions of topic T1 (105). 
[0018] For each topic, a Kafka cluster maintains a partitioned log 200 as illustratively depicted in FIG. 2, where records (also referred to herein interchangeably as messages) in a partition are each assigned a sequential identifier (id), called an offset, when published that uniquely identifies each record within the partition. When a Kafka consumer consumes a Kafka message, the message is consumed in ascending order of offset to a specific partition (i.e., from small to big), not topic. There may be different types of offsets for a partition. A current position offset 205 is generated when a consumer consumes a Kafka message and the offset is increased by 1. A last committed offset 210 is the last acknowledged offset and can be characterized by two modes, automatic and manual. In the automatic mode, the offset will be automatically committed when a message is consumed. In the manual mode, the offset can be acknowledged by an API. A high watermark type offset refers to the offset where all messages below are replicated and is the highest offset a Kafka consumer can consume. FIG. 2 further includes a log end offset 220 that is at the end of the partition 200.")
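For context only, the topic/partition/consumer-group structure the quoted Luo passage describes (a topic divided into partitions, sequential offsets within each partition, and each consumer in a group assigned a disjoint subset of partitions, as in Luo's FIG. 1) can be sketched as an in-memory simulation. All names, keys, and record values below are hypothetical and appear in neither the cited reference nor the pending claims.

```python
from collections import defaultdict

topic = defaultdict(list)                 # partition id -> ordered records

def publish(record, num_partitions=4):
    """Route a record to a partition by key; appending preserves order,
    so a record's list index is its offset within that partition."""
    p = record["key"] % num_partitions
    topic[p].append(record)
    return p

# Two consumers in one group split four partitions, mirroring
# Consumer 1 / Consumer 2 over Partitions 0-3 in Luo's FIG. 1.
assignment = {"consumer-1": [0, 1], "consumer-2": [2, 3]}

def consume(consumer_id):
    """Read only the assigned partitions, in ascending offset order."""
    return [(p, offset, r["value"])
            for p in assignment[consumer_id]
            for offset, r in enumerate(topic[p])]

for key, value in [(0, "bt-a"), (1, "bt-b"), (0, "bt-c"), (2, "bt-d")]:
    publish({"key": key, "value": value})

# Ordering is guaranteed within a partition (offsets 0 and 1 of
# partition 0 stay in publish order), not across the topic as a whole.
assert consume("consumer-1") == [(0, 0, "bt-a"), (0, 1, "bt-c"), (1, 0, "bt-b")]
assert consume("consumer-2") == [(2, 0, "bt-d")]
```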
sending, by the event streaming platform, the balance transfer data records in a sequential order in which the partitioned data was received to the consuming application;
(Luo US20190384835 at paras. 17-19) ("[0017] Prior to discussing the features of the ingestion process(es) herein, a number of aspects will be introduced. In some instances, the streaming data that may be received and processed by methods and systems herein may be Apache Kafka® data events, messages, or records. Kafka is run as a cluster on one or more servers that can serve a plurality of datacenters. A Kafka cluster stores streams of records produced by publishers for consumption by consumer applications. A Kafka publisher publishes a stream of records or messages to one or more Kafka topics and a Kafka consumer subscribes to one or more Kafka topics. A Kafka data stream is organized in a category or feed name, called a topic, to which the messages are stored and published. Kafka topics are divided into a number of partitions that contain messages in an ordered, unchangeable sequence. Each topic may have multiple partitions. Multiple consumers may be needed or desired to read from the same topic to, for example, keep pace with the publishing rate of producers of the topic. Consumers may be organized into consumer groups, where topics are consumed in the consumer groups and each topic can only be consumed in a specific consumer group. However, one consumer in a specific consumer group might consume multiple partitions of the same topic. That is, each consumer in a consumer group might receive messages from a different subset of the partitions in a particular topic. FIG. 1 is an illustrative depiction of some aspects of a Kafka data stream including a topic T1 at 105 that is divided into four partitions (Partition 0, Partition 1, Partition 2, and Partition 3). As shown in the example of FIG. 1, a consumer group 110 includes two consumers (i.e., Consumer 1, Consumer 2), where topic T1 is consumed by consumer group 110 and the consumers 115, 120 in consumer group 110 each subscribes to and consumes multiple partitions of topic T1 (105). 
[0018] For each topic, a Kafka cluster maintains a partitioned log 200 as illustratively depicted in FIG. 2, where records (also referred to herein interchangeably as messages) in a partition are each assigned a sequential identifier (id), called an offset, when published that uniquely identifies each record within the partition. When a Kafka consumer consumes a Kafka message, the message is consumed in ascending order of offset to a specific partition (i.e., from small to big), not topic. There may be different types of offsets for a partition. A current position offset 205 is generated when a consumer consumes a Kafka message and the offset is increased by 1. A last committed offset 210 is the last acknowledged offset and can be characterized by two modes, automatic and manual. In the automatic mode, the offset will be automatically committed when a message is consumed. In the manual mode, the offset can be acknowledged by an API. A high watermark type offset refers to the offset where all messages below are replicated and is the highest offset a Kafka consumer can consume. FIG. 2 further includes a log end offset 220 that is at the end of the partition 200.")
posting, by the posting application, a hard post of the payment indicated in the first part of the one of the plurality of balance transfer data records to an account ledger without requiring a prior soft post.
(Luo US20190384835 at paras. 17-19) ("[0019] FIG. 3 is an illustrative depiction of one example process 300 herein. Process 300 is one embodiment of an ingestion process and includes, at a high level, two steps or operations 305 and 310. An example associated with FIG. 3 in one embodiment includes a data stream comprising a Kafka data stream and a target data storage including a Hadoop cluster (e.g., HDFS+Hive ACID tables). Operation 305 includes a streaming writer. In the example of FIG. 3, the streaming writer 305 writes Kafka events (i.e., messages) to ingestion HDFS folders or other data structures organized by the topic and partition of the Kafka messages. Additionally, the writing of the Kafka messages is processed as a transaction, wherein the Kafka offset associated with the writing operation is acknowledged when the HDFS write is completed or finished. By acknowledging the Kafka offset only after the HDFS write is finished, process 300 can avoid having to include a data recovery mechanism or the streaming writer since either the write operation occurs (and is acknowledged) or it does not happen (i.e., no acknowledgement). In some embodiments, operation 305 may include merging the Kafka events, which are typically small files, and writing the merged files to a write-ahead log (WAL). In the example of FIG. 3, one WAL is used for each Kafka partition to allow acknowledgement. In some aspects, merging the Kafka files limits a total number of files and may thus avoid overburdening a Namenode. In some instances, the merging of the Kafka files may have the benefit of increasing throughput (e.g., about 10×).")
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Zhou, Jones, and Luo, because it allows for an improved distributed processing system and method solution that, in some aspects, can efficiently handle streaming data events having a variety of different data structure schemas, exhibits a minimal need for data recovery mechanisms, is optimized for query and purge operations, and reduces the level of compaction used to efficiently store data in the data store. (Luo at Abstract and paras. 1-3).
As per claim 2,
Zhou explicitly teaches:
wherein at least part of the balance transfer request data is received from a public-facing web interface.
(Zhou US20210248613 at paras. 52-54 and 136-138) ("The normalization rules may include transforming all the time stamps from events into GMT or transforming all the amount information to U.S. dollars. Moreover, in some embodiments, client systems 190 may include aggregator website or a search engine, which may pull frequently information from service system 105. Alternatively, or additionally, client systems 190 may host e-commerce websites.")
As per claim 3,
Zhou explicitly teaches:
wherein the producer application is one instance of a plurality of producer applications.
(Zhou US20210248613 at paras. 92-94) ("[0093] Moreover, different elements of system 100 may interact with brokers 234 using API's supported by stream operator 110. For example, client devices 150, client systems 190, and/or online resources 140 may interact with brokers 234 with (1) a producer API, which allows publishing streams of records; (2) a consumer API, which allows to subscribe to topics and processes streams of records; (3) a connector API, executing the reusable producer and consumer APIs that can link the topics to the existing applications, and/or (4) stream API, which converts the input streams to output and produces the result.")
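For context only, the broker interactions enumerated in the quoted Zhou passage (a producer API that publishes streams of records, a consumer API that subscribes to topics, and a stream API that converts input streams to output) can be sketched as a minimal in-memory broker. This is an illustration under assumed names, not the Zhou reference's actual implementation; the topic names and record contents are hypothetical.

```python
class Broker:
    def __init__(self):
        self.topics = {}             # topic name -> list of records
        self.subscribers = {}        # topic name -> list of callbacks

    # producer API: publish a stream of records to a topic
    def publish(self, topic, record):
        self.topics.setdefault(topic, []).append(record)
        for cb in self.subscribers.get(topic, []):
            cb(record)

    # consumer API: subscribe to a topic and process its records
    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    # stream API: convert an input topic's records to an output topic
    def stream(self, src, dst, transform):
        self.subscribe(src, lambda r: self.publish(dst, transform(r)))

broker = Broker()
seen = []
broker.subscribe("balance-transfers-usd", seen.append)
broker.stream("balance-transfers-raw", "balance-transfers-usd",
              lambda r: {**r, "currency": "USD"})   # e.g., normalize to USD
broker.publish("balance-transfers-raw", {"amount": 125.00})
assert seen == [{"amount": 125.0, "currency": "USD"}]
```

The same `publish` entry point serves multiple producer application instances, which is consistent with the claim 3 limitation that the producer application is one instance of a plurality.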
Claims 8-10 and 15-17 are substantially similar to claims 1-3 and are therefore rejected on similar grounds.
Response to Arguments
Applicant’s arguments filed on October 16, 2025 have been fully considered but are not persuasive for the following reasons:
With respect to Applicant’s arguments as to the § 101 rejections for now pending claims 1-3, 8-10, and 15-17, Examiner notes the following:
Applicant argues that the claims are not directed to an abstract idea.
Examiner disagrees, however, and notes that the claim as a whole recites a method that, under its broadest reasonable interpretation, covers collecting, analyzing, and transmitting data to facilitate a transfer of resources, such as cash or money, between users. This is a fundamental economic practice (a financial transaction), a commercial interaction (such as business relations), and the managing of personal behavior or relationships or interactions between people, each of which falls within the certain methods of organizing human activity grouping. Thus, the claims recite an abstract idea.
Applicant argues that the amended features would integrate the abstract idea into a practical application.
Examiner disagrees, however, and notes that the additional elements (a “producer application”, “an event streaming platform comprising a cluster of servers”, a “consuming application”, “an exposed application program interface (API) from a real-time payment gateway of a real-time payment network”, “a consumer API”, and “a posting application”, invoked to perform the steps of “formatting”, “replicating”, “writing”, “subscribing”, and “posting”) are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. The claims at issue cover collecting, analyzing, and transmitting data to facilitate a transfer of resources, such as cash or money, between users, and invoke these computer elements merely as tools to execute the abstract idea. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mental process), does not integrate a judicial exception into a practical application. (MPEP § 2106.05(f))
Examiner notes that the stated problems of inaccurate and unnecessary process steps are not technical problems, and the claimed solution is not a technical solution. In the claim, the solution of providing a consistent and methodical way to transfer critical records to maintain streaming order is part of the abstract idea, as it merely involves collecting, analyzing, and transmitting data to facilitate a transfer of resources, such as cash or money, between users. Furthermore, the data manipulation and analysis could be performed mentally or manually with pen and paper.
Finally, the Applicant argues that the claims are directed to significantly more than the abstract idea.
Examiner disagrees, however, and notes that, as explained above in the instant rejection under 35 U.S.C. § 101, the additional elements do not amount to an inventive concept. The additional elements (a “producer application”, “an event streaming platform comprising a cluster of servers”, a “consuming application”, “an exposed application program interface (API) from a real-time payment gateway of a real-time payment network”, “a consumer API”, and “a posting application”, performing the steps of “formatting”, “replicating”, “writing”, “subscribing”, and “posting”) are merely generic computer components performing their well-known basic functions of collecting, analyzing, and transmitting data to facilitate a transfer of resources, such as cash or money, between users. Per the specification, the recited computer elements are described only at a high level of generality (see Spec. at paras. [0054], [0073]). In view of the specification, the computer elements are merely being applied to the abstract idea.
The other limitations, which merely support the abstract idea, correspond to insignificant extra-solution activity and do not transform the abstract idea into patent-eligible subject matter. Also, the functionality here is already present in the recited hardware, which is merely routine and conventional. Collecting, analyzing, and transmitting data is routine and conventional. There is no technological problem or solution identified; this is merely a business solution that transfers data between devices. (MPEP § 2106.05(f))
With respect to Applicant’s arguments as to the § 103 rejections for now pending claims 1-3, 8-10, and 15-17, Examiner notes that the arguments are moot in light of the new grounds of rejection above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is available for review on Form PTO-892 Notice of References Cited.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MERRITT J HASBROUCK whose telephone number is (571)272-3109. The examiner can normally be reached M-F 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christine Tran can be reached on 571-272-8103. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MERRITT J HASBROUCK/Examiner, Art Unit 3695
/CHRISTINE M TRAN/Supervisory Patent Examiner, Art Unit 3695