Prosecution Insights
Last updated: April 19, 2026
Application No. 18/153,077

EVENT FANNING PLATFORM FOR STREAMING NETWORK EVENT DATA TO CONSUMER APPLICATIONS

Status: Final Rejection (§102, §103)
Filed: Jan 11, 2023
Examiner: BULLOCK JR, LEWIS ALEXANDER
Art Unit: 2199
Tech Center: 2100 — Computer Architecture & Software
Assignee: Chime Financial Inc.
OA Round: 2 (Final)
Grant Probability: 23% (At Risk)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 11m
Grant Probability With Interview: 79%

Examiner Intelligence

Career Allow Rate: 23% (15 granted / 65 resolved), -31.9% vs TC avg
Interview Lift: +56.0% higher allowance rate among resolved cases with an interview
Avg Prosecution: 3y 11m typical timeline; 12 applications currently pending
Total Applications: 77 across all art units

Statute-Specific Performance

§101: 20.1% (-19.9% vs TC avg)
§103: 43.7% (+3.7% vs TC avg)
§102: 17.4% (-22.6% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)
Tech Center averages are estimates; based on career data from 65 resolved cases.
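The headline examiner statistics above reduce to simple arithmetic. A minimal sketch, using only the counts stated in the report; note the Tech Center average is back-computed from the stated -31.9% delta and is therefore only an estimate:

```python
# Illustrative arithmetic behind the headline figures above. The raw
# counts (15 granted of 65 resolved) appear in the report; the Tech
# Center average is back-computed from the stated -31.9% delta and is
# therefore only an estimate, not a figure from the source.

granted = 15
resolved = 65

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 23.1%, shown as 23%

delta_vs_tc = -0.319  # stated gap relative to the Tech Center average
implied_tc_avg = allow_rate - delta_vs_tc
print(f"Implied TC average: {implied_tc_avg:.1%}")  # 55.0%
```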

Office Action

Grounds of Rejection: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by THOMAS (U.S. Publication 2024/0195675). The applied reference has a common assignee / joint inventor with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2). This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C.
102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.

As to claim 1, THOMAS teaches a method comprising: receiving, from a consumer application, an event request indicating a requested network event from among a plurality of network events hosted by a network event data streaming platform (par. 0017, “As just mentioned, the event bus system can utilize a network event data streaming platform for distributing network event data to requesting components or third-party systems. For example, the event bus system receives an event request from a network component or a third-party system and provides network event data based on the event request. Along these lines, rather than requiring developer curation of network event data to locate and collect data for a received event request, the event bus system can instead make network event data available in a self-service fashion. More specifically, the event bus system can utilize a network event data streaming platform that readily provides network events to requesting network components/systems from respective event platform sources where they are housed for distribution. Thus, in response to receiving a self-service event request, the event bus system can identify the event platform source for the requested event and can provide the requested event from the identified source.
In some cases, the event bus system can further make network events discoverable throughout the network event data streaming platform (e.g., at their respective sources) such that requesting network components/systems can view or otherwise identify network events available for request.; par. 0076-0079, “As further illustrated in FIG. 3, the network event data streaming platform includes an event fanning platform 316. Indeed, as mentioned above, the event bus system 106 utilizes the event fanning platform 316 to generate low-latency fanned data streams to broadcast network events to requesting network components or third-party systems. For example, the event bus system 106 receives a network event request and determines a latency requirement for the request. Based on determining that the latency requirement is below a latency threshold, the event bus system 106 further determines that using the data lake 314 is not a viable option to provide the requested network event at the required speed (or in the required time) indicated by the latency of the request. Accordingly, the event bus system 106 utilizes the event fanning platform 316 to generate a fanned data stream for the requested event for access by the requesting component/system. [0077] In some embodiments, the event fanning platform 316 fans out network events to consumer application data streams (e.g., low-latency fanned data streams), such as the consumer application data stream 322 on the consumer application server 320. For instance, the event fanning platform 316 includes a processor that reads from a single data stream (e.g., the global event data stream 310) and writes to multiple streams based on a set of declarative configurations dictating what events need to be written to which consumer application data stream (or consumer application server). 
[0078] To elaborate, based on receiving a network event request, the event bus system 106 determines or identifies an event fanning configuration (as defined by the request or a previous/initial request) that indicates a configuration for one or more requested network events. Specifically, an event fanning configuration indicates a destination data stream (and its streaming protocol or stream type, such as Kineses or Kafka) along with network events to provide to the destination data stream. In one or more embodiments, the event fanning platform 316 can update an event fanning configuration dynamically based on a new or updated event request, based on permissions associated with a requesting component/system, and/or according to throughput metrics and server capacity…[0079] Based on an event fanning configuration indicating one or more short-retention network events, the event fanning platform 316 generates a corresponding low-latency fanned data stream for the requested short-retention network events. The event fanning platform 316 further provides or broadcasts the low-latency fanned data stream to a requesting component, such as the consumer application server 320 or a third-party data server from among the third-party data servers 326. For instance, the event fanning platform 316 provides or broadcasts the fanned data stream to the consumer application data stream 322 on the consumer application server 320. Indeed, the consumer application server 320 generates and provides the event request including an event fanning configuration, whereupon the event fanning platform 316 fans out the relevant events to the appropriate consumer application data stream 322.”); determining a request volume within the network event data streaming platform (par. 
0095, “[0095] If the event bus system 106 determines that the requested network event cannot be performed in batch mode (e.g., because the latency exceeds a batch mode threshold), the event bus system 106 performs an act 412 to determine a network event volume for the event request. In particular, the event bus system 106 determines (or receives an indication of) a volume or a number of network events (e.g., of the type indicated by the requested event) that the network event data streaming platform has available within the data lake 314 and/or within fanned data streams. Thus, event bus system 106 determines busy and/or available resources for provisioning new events if necessary. In some cases, the event bus system 106 determines a volume or a number of network events requested by the received self-service transaction request as part of the resource determination. Additionally, the event bus system 106 performs an act 414 to orchestrate creation of a Kinesis stream and a corresponding configuration for the stream using the event fanning platform 316 based on the volume(s). Indeed, as mentioned the event bus system 106 determines the event fanning configuration from the request as indicated by a requesting component/system (e.g., the consumer application server 320 or one of the third-party data servers 326).”; 0099, “[0099] In addition, the event data catalog 506 passes the information for the event(s) to a data stream orchestration engine 508 to determine a volume of network events streamed (or made available) by the network event data streaming platform (e.g., via low-latency fanned data streams). In response, the data stream orchestration engine 508 identifies the volume of network events within the network event data streaming platform that match the requested event (e.g., 3000 events per second). 
The data stream orchestration engine 508 passes the event volume information to the event transformation engine 504 to determine a number of server shards to use/dedicate for the network events of the self-service event request.”; generating, using an event fanning platform in response to determining the request volume, a consumer application data stream specific to the requested network event and tied to a lifecycle of the consumer application by allocating resources from the network event data streaming platform to the consumer application data stream according to the request volume ( “[0095] If the event bus system 106 determines that the requested network event cannot be performed in batch mode (e.g., because the latency exceeds a batch mode threshold), the event bus system 106 performs an act 412 to determine a network event volume for the event request. In particular, the event bus system 106 determines (or receives an indication of) a volume or a number of network events (e.g., of the type indicated by the requested event) that the network event data streaming platform has available within the data lake 314 and/or within fanned data streams. Thus, event bus system 106 determines busy and/or available resources for provisioning new events if necessary. In some cases, the event bus system 106 determines a volume or a number of network events requested by the received self-service transaction request as part of the resource determination. Additionally, the event bus system 106 performs an act 414 to orchestrate creation of a Kinesis stream and a corresponding configuration for the stream using the event fanning platform 316 based on the volume(s). 
Indeed, as mentioned the event bus system 106 determines the event fanning configuration from the request as indicated by a requesting component/system (e.g., the consumer application server 320 or one of the third-party data servers 326).”; 0099, “[0099] In addition, the event data catalog 506 passes the information for the event(s) to a data stream orchestration engine 508 to determine a volume of network events streamed (or made available) by the network event data streaming platform (e.g., via low-latency fanned data streams). In response, the data stream orchestration engine 508 identifies the volume of network events within the network event data streaming platform that match the requested event (e.g., 3000 events per second). The data stream orchestration engine 508 passes the event volume information to the event transformation engine 504 to determine a number of server shards to use/dedicate for the network events of the self-service event request.; [0084] The consumer application server 320 thus executes a consumer application using data within the consumer application data stream 322. Consumer applications include applications for tracking device interactions, reporting on network stability/data loss, identifying login attempts, generating financial reports, executing asset transfers, checking account credit, or performing some other transaction. The event bus system 106 maintains the life cycle of the consumer application data stream 322 based on the life cycle of the corresponding consumer application. 
In response to detecting that the consumer application is deprecated, the event bus system 106 further removes or deprecates the consumer application data stream 322 as well.”); providing, using the event fanning platform, the requested network event to the consumer application via the consumer application data stream ([0095] If the event bus system 106 determines that the requested network event cannot be performed in batch mode (e.g., because the latency exceeds a batch mode threshold), the event bus system 106 performs an act 412 to determine a network event volume for the event request. In particular, the event bus system 106 determines (or receives an indication of) a volume or a number of network events (e.g., of the type indicated by the requested event) that the network event data streaming platform has available within the data lake 314 and/or within fanned data streams. Thus, event bus system 106 determines busy and/or available resources for provisioning new events if necessary. In some cases, the event bus system 106 determines a volume or a number of network events requested by the received self-service transaction request as part of the resource determination. Additionally, the event bus system 106 performs an act 414 to orchestrate creation of a Kinesis stream and a corresponding configuration for the stream using the event fanning platform 316 based on the volume(s). Indeed, as mentioned the event bus system 106 determines the event fanning configuration from the request as indicated by a requesting component/system (e.g., the consumer application server 320 or one of the third-party data servers 326).”; 0099, “[0099] In addition, the event data catalog 506 passes the information for the event(s) to a data stream orchestration engine 508 to determine a volume of network events streamed (or made available) by the network event data streaming platform (e.g., via low-latency fanned data streams). 
In response, the data stream orchestration engine 508 identifies the volume of network events within the network event data streaming platform that match the requested event (e.g., 3000 events per second). The data stream orchestration engine 508 passes the event volume information to the event transformation engine 504 to determine a number of server shards to use/dedicate for the network events of the self-service event request.; [0084] The consumer application server 320 thus executes a consumer application using data within the consumer application data stream 322. Consumer applications include applications for tracking device interactions, reporting on network stability/data loss, identifying login attempts, generating financial reports, executing asset transfers, checking account credit, or performing some other transaction. The event bus system 106 maintains the life cycle of the consumer application data stream 322 based on the life cycle of the corresponding consumer application. In response to detecting that the consumer application is deprecated, the event bus system 106 further removes or deprecates the consumer application data stream 322 as well.”); and in response to detecting a deprecation of the consumer application, deprecating the consumer application data stream within the event fanning platform ([0084] The consumer application server 320 thus executes a consumer application using data within the consumer application data stream 322. Consumer applications include applications for tracking device interactions, reporting on network stability/data loss, identifying login attempts, generating financial reports, executing asset transfers, checking account credit, or performing some other transaction. The event bus system 106 maintains the life cycle of the consumer application data stream 322 based on the life cycle of the corresponding consumer application. 
In response to detecting that the consumer application is deprecated, the event bus system 106 further removes or deprecates the consumer application data stream 322 as well.”).

As to claim 2, THOMAS teaches receiving the event request indicating the requested network event by: receiving an application configuration file defining the consumer application; and detecting, within the application configuration file, a code segment defining the requested network event ([0078] To elaborate, based on receiving a network event request, the event bus system 106 determines or identifies an event fanning configuration (as defined by the request or a previous/initial request) that indicates a configuration for one or more requested network events. Specifically, an event fanning configuration indicates a destination data stream (and its streaming protocol or stream type, such as Kineses or Kafka) along with network events to provide to the destination data stream. In one or more embodiments, the event fanning platform 316 can update an event fanning configuration dynamically based on a new or updated event request, based on permissions associated with a requesting component/system, and/or according to throughput metrics and server capacity. In some cases, an event fanning configuration has the following format:

[
  {
    stream: "arn:aws:kinesis:us-east-1:802476504392:stream/de-segmentatom-alerts-login-prod",
    events: [
      {
        name: "chime.risk.v1.UserEnrollmenEvent",
        query: "SELECT * FROM chime.risk.v1.UserEnrollmenEvent WHERE location IS 'SF' OR location IS 'NYC'"
      }
    ]
  }
]

[0079] Based on an event fanning configuration indicating one or more short-retention network events, the event fanning platform 316 generates a corresponding low-latency fanned data stream for the requested short-retention network events.
The event fanning platform 316 further provides or broadcasts the low-latency fanned data stream to a requesting component, such as the consumer application server 320 or a third-party data server from among the third-party data servers 326. For instance, the event fanning platform 316 provides or broadcasts the fanned data stream to the consumer application data stream 322 on the consumer application server 320. Indeed, the consumer application server 320 generates and provides the event request including an event fanning configuration, whereupon the event fanning platform 316 fans out the relevant events to the appropriate consumer application data stream 322.). As to claim 3, THOMAS teaches generating the consumer application data stream by using the event fanning platform to generate a data stream configuration file defining the consumer application data stream to include the requested network event (par. 0017, “As just mentioned, the event bus system can utilize a network event data streaming platform for distributing network event data to requesting components or third-party systems. For example, the event bus system receives an event request from a network component or a third-party system and provides network event data based on the event request. Along these lines, rather than requiring developer curation of network event data to locate and collect data for a received event request, the event bus system can instead make network event data available in a self-service fashion. More specifically, the event bus system can utilize a network event data streaming platform that readily provides network events to requesting network components/systems from respective event platform sources where they are housed for distribution. Thus, in response to receiving a self-service event request, the event bus system can identify the event platform source for the requested event and can provide the requested event from the identified source. 
In some cases, the event bus system can further make network events discoverable throughout the network event data streaming platform (e.g., at their respective sources) such that requesting network components/systems can view or otherwise identify network events available for request.; par. 0076-0079, “As further illustrated in FIG. 3, the network event data streaming platform includes an event fanning platform 316. Indeed, as mentioned above, the event bus system 106 utilizes the event fanning platform 316 to generate low-latency fanned data streams to broadcast network events to requesting network components or third-party systems. For example, the event bus system 106 receives a network event request and determines a latency requirement for the request. Based on determining that the latency requirement is below a latency threshold, the event bus system 106 further determines that using the data lake 314 is not a viable option to provide the requested network event at the required speed (or in the required time) indicated by the latency of the request. Accordingly, the event bus system 106 utilizes the event fanning platform 316 to generate a fanned data stream for the requested event for access by the requesting component/system. [0077] In some embodiments, the event fanning platform 316 fans out network events to consumer application data streams (e.g., low-latency fanned data streams), such as the consumer application data stream 322 on the consumer application server 320. For instance, the event fanning platform 316 includes a processor that reads from a single data stream (e.g., the global event data stream 310) and writes to multiple streams based on a set of declarative configurations dictating what events need to be written to which consumer application data stream (or consumer application server). 
[0078] To elaborate, based on receiving a network event request, the event bus system 106 determines or identifies an event fanning configuration (as defined by the request or a previous/initial request) that indicates a configuration for one or more requested network events. Specifically, an event fanning configuration indicates a destination data stream (and its streaming protocol or stream type, such as Kineses or Kafka) along with network events to provide to the destination data stream. In one or more embodiments, the event fanning platform 316 can update an event fanning configuration dynamically based on a new or updated event request, based on permissions associated with a requesting component/system, and/or according to throughput metrics and server capacity…[0079] Based on an event fanning configuration indicating one or more short-retention network events, the event fanning platform 316 generates a corresponding low-latency fanned data stream for the requested short-retention network events. The event fanning platform 316 further provides or broadcasts the low-latency fanned data stream to a requesting component, such as the consumer application server 320 or a third-party data server from among the third-party data servers 326. For instance, the event fanning platform 316 provides or broadcasts the fanned data stream to the consumer application data stream 322 on the consumer application server 320. Indeed, the consumer application server 320 generates and provides the event request including an event fanning configuration, whereupon the event fanning platform 316 fans out the relevant events to the appropriate consumer application data stream 322.”); . 
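The fan-out mechanism quoted above (par. 0077-0078: one processor reads a single global stream and writes to multiple destination streams per declarative configurations) can be sketched as follows. All identifiers below are illustrative assumptions, not names from the reference:

```python
# Minimal sketch of the declarative fan-out behavior from the quoted
# paragraphs: a single reader consumes the global event stream and
# routes each event to every destination stream whose configuration
# lists that event's name. Stream and event names are hypothetical.

from collections import defaultdict

# Declarative fanning configurations: one entry per destination stream.
fanning_configs = [
    {"stream": "consumer-app-a-stream", "events": ["user.enrollment"]},
    {"stream": "consumer-app-b-stream", "events": ["user.enrollment", "user.login"]},
]

def fan_out(global_stream, configs):
    """Read each event once and append it to every destination stream
    whose configuration names it."""
    destinations = defaultdict(list)
    for event in global_stream:
        for config in configs:
            if event["name"] in config["events"]:
                destinations[config["stream"]].append(event)
    return destinations

global_stream = [
    {"name": "user.enrollment", "payload": {"user_id": 1}},
    {"name": "user.login", "payload": {"user_id": 1}},
]

out = fan_out(global_stream, fanning_configs)
print({stream: len(events) for stream, events in out.items()})
# {'consumer-app-a-stream': 1, 'consumer-app-b-stream': 2}
```

A production system would of course write to real Kinesis or Kafka streams and filter with the configured queries rather than bare name matching; the sketch only shows the one-reader, many-writers routing shape.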
As to claim 4, THOMAS teaches receiving, from an additional consumer application, an additional event request indicating the requested network event; and generating, using the event fanning platform, an additional consumer application data stream including the requested network event for the additional consumer application (par. 0017, “As just mentioned, the event bus system can utilize a network event data streaming platform for distributing network event data to requesting components or third-party systems. For example, the event bus system receives an event request from a network component or a third-party system and provides network event data based on the event request. Along these lines, rather than requiring developer curation of network event data to locate and collect data for a received event request, the event bus system can instead make network event data available in a self-service fashion. More specifically, the event bus system can utilize a network event data streaming platform that readily provides network events to requesting network components/systems from respective event platform sources where they are housed for distribution. Thus, in response to receiving a self-service event request, the event bus system can identify the event platform source for the requested event and can provide the requested event from the identified source. In some cases, the event bus system can further make network events discoverable throughout the network event data streaming platform (e.g., at their respective sources) such that requesting network components/systems can view or otherwise identify network events available for request.; par. 0076-0079, “As further illustrated in FIG. 3, the network event data streaming platform includes an event fanning platform 316. 
Indeed, as mentioned above, the event bus system 106 utilizes the event fanning platform 316 to generate low-latency fanned data streams to broadcast network events to requesting network components or third-party systems. For example, the event bus system 106 receives a network event request and determines a latency requirement for the request. Based on determining that the latency requirement is below a latency threshold, the event bus system 106 further determines that using the data lake 314 is not a viable option to provide the requested network event at the required speed (or in the required time) indicated by the latency of the request. Accordingly, the event bus system 106 utilizes the event fanning platform 316 to generate a fanned data stream for the requested event for access by the requesting component/system. [0077] In some embodiments, the event fanning platform 316 fans out network events to consumer application data streams (e.g., low-latency fanned data streams), such as the consumer application data stream 322 on the consumer application server 320. For instance, the event fanning platform 316 includes a processor that reads from a single data stream (e.g., the global event data stream 310) and writes to multiple streams based on a set of declarative configurations dictating what events need to be written to which consumer application data stream (or consumer application server). [0078] To elaborate, based on receiving a network event request, the event bus system 106 determines or identifies an event fanning configuration (as defined by the request or a previous/initial request) that indicates a configuration for one or more requested network events. Specifically, an event fanning configuration indicates a destination data stream (and its streaming protocol or stream type, such as Kineses or Kafka) along with network events to provide to the destination data stream. 
In one or more embodiments, the event fanning platform 316 can update an event fanning configuration dynamically based on a new or updated event request, based on permissions associated with a requesting component/system, and/or according to throughput metrics and server capacity…[0079] Based on an event fanning configuration indicating one or more short-retention network events, the event fanning platform 316 generates a corresponding low-latency fanned data stream for the requested short-retention network events. The event fanning platform 316 further provides or broadcasts the low-latency fanned data stream to a requesting component, such as the consumer application server 320 or a third-party data server from among the third-party data servers 326. For instance, the event fanning platform 316 provides or broadcasts the fanned data stream to the consumer application data stream 322 on the consumer application server 320. Indeed, the consumer application server 320 generates and provides the event request including an event fanning configuration, whereupon the event fanning platform 316 fans out the relevant events to the appropriate consumer application data stream 322.”); . As to claim 5, THOMAS teaches determining a network event environment for deploying the consumer application; and generating the consumer application data stream specific to the network event environment of the consumer application (par. 0017, “As just mentioned, the event bus system can utilize a network event data streaming platform for distributing network event data to requesting components or third-party systems. For example, the event bus system receives an event request from a network component or a third-party system and provides network event data based on the event request. 
Along these lines, rather than requiring developer curation of network event data to locate and collect data for a received event request, the event bus system can instead make network event data available in a self-service fashion. More specifically, the event bus system can utilize a network event data streaming platform that readily provides network events to requesting network components/systems from respective event platform sources where they are housed for distribution. Thus, in response to receiving a self-service event request, the event bus system can identify the event platform source for the requested event and can provide the requested event from the identified source. In some cases, the event bus system can further make network events discoverable throughout the network event data streaming platform (e.g., at their respective sources) such that requesting network components/systems can view or otherwise identify network events available for request.; par. 0076-0079, “As further illustrated in FIG. 3, the network event data streaming platform includes an event fanning platform 316. Indeed, as mentioned above, the event bus system 106 utilizes the event fanning platform 316 to generate low-latency fanned data streams to broadcast network events to requesting network components or third-party systems. For example, the event bus system 106 receives a network event request and determines a latency requirement for the request. Based on determining that the latency requirement is below a latency threshold, the event bus system 106 further determines that using the data lake 314 is not a viable option to provide the requested network event at the required speed (or in the required time) indicated by the latency of the request. Accordingly, the event bus system 106 utilizes the event fanning platform 316 to generate a fanned data stream for the requested event for access by the requesting component/system. 
[0077] In some embodiments, the event fanning platform 316 fans out network events to consumer application data streams (e.g., low-latency fanned data streams), such as the consumer application data stream 322 on the consumer application server 320. For instance, the event fanning platform 316 includes a processor that reads from a single data stream (e.g., the global event data stream 310) and writes to multiple streams based on a set of declarative configurations dictating what events need to be written to which consumer application data stream (or consumer application server). [0078] To elaborate, based on receiving a network event request, the event bus system 106 determines or identifies an event fanning configuration (as defined by the request or a previous/initial request) that indicates a configuration for one or more requested network events. Specifically, an event fanning configuration indicates a destination data stream (and its streaming protocol or stream type, such as Kineses or Kafka) along with network events to provide to the destination data stream. In one or more embodiments, the event fanning platform 316 can update an event fanning configuration dynamically based on a new or updated event request, based on permissions associated with a requesting component/system, and/or according to throughput metrics and server capacity…[0079] Based on an event fanning configuration indicating one or more short-retention network events, the event fanning platform 316 generates a corresponding low-latency fanned data stream for the requested short-retention network events. The event fanning platform 316 further provides or broadcasts the low-latency fanned data stream to a requesting component, such as the consumer application server 320 or a third-party data server from among the third-party data servers 326. 
For instance, the event fanning platform 316 provides or broadcasts the fanned data stream to the consumer application data stream 322 on the consumer application server 320. Indeed, the consumer application server 320 generates and provides the event request including an event fanning configuration, whereupon the event fanning platform 316 fans out the relevant events to the appropriate consumer application data stream 322.”).

As to claim 6, THOMAS teaches detecting a deprecation of the consumer application by detecting a user interaction deleting the consumer application from the network event data streaming platform; and deprecating the consumer application data stream by automatically unbinding server resources allocated to the consumer application data stream in response to detecting the deprecation of the consumer application ([0084] “The consumer application server 320 thus executes a consumer application using data within the consumer application data stream 322. Consumer applications include applications for tracking device interactions, reporting on network stability/data loss, identifying login attempts, generating financial reports, executing asset transfers, checking account credit, or performing some other transaction. The event bus system 106 maintains the life cycle of the consumer application data stream 322 based on the life cycle of the corresponding consumer application. In response to detecting that the consumer application is deprecated, the event bus system 106 further removes or deprecates the consumer application data stream 322 as well.”).
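The mechanism cited above from THOMAS — a single processor reading the global event data stream and writing to multiple consumer streams under declarative fanning configurations (par. [0077]-[0078]), with stream life cycles tied to their consumer applications (par. [0084]) — can be sketched roughly as follows. This is an illustrative reconstruction, not code from the application or the reference; every name here (FanConfig, EventFanner, the event dictionaries) is invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class FanConfig:
    """Hypothetical declarative fanning configuration: a destination
    stream, its stream type (e.g. "kafka" or "kinesis"), and the
    event names routed to it (cf. THOMAS par. [0078])."""
    destination: str
    stream_type: str
    event_names: set = field(default_factory=set)

class EventFanner:
    """Single reader, multiple writers: reads one global stream and
    fans events out to consumer streams per declarative configs."""

    def __init__(self):
        self.configs: dict[str, FanConfig] = {}
        self.outputs: dict[str, list] = {}  # destination -> written events

    def upsert_config(self, config: FanConfig) -> None:
        # Configurations may be added or updated dynamically, e.g. on
        # a new or updated event request (par. [0078]).
        self.configs[config.destination] = config
        self.outputs.setdefault(config.destination, [])

    def deprecate(self, destination: str) -> None:
        # Life-cycle management (par. [0084]): when the consumer
        # application is deleted, unbind its stream resources.
        self.configs.pop(destination, None)
        self.outputs.pop(destination, None)

    def process(self, global_stream: list[dict]) -> None:
        # Each event goes to every destination whose configuration
        # requests its event name.
        for event in global_stream:
            for cfg in self.configs.values():
                if event["name"] in cfg.event_names:
                    self.outputs[cfg.destination].append(event)
```

Under this sketch, claim 7's "modification to the event request" corresponds to calling `upsert_config` with a config naming an additional event, and claim 6's deprecation corresponds to `deprecate` releasing the stream.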
As to claim 7, THOMAS teaches detecting a modification to the event request indicating an additional requested network event from among the plurality of network events hosted by the network event data streaming platform; and based on the modification to the event request, generating an updated consumer application data stream using the event fanning platform indicating the requested network event and the additional requested network event ([0116] Additionally, the series of acts 600 can include an act of generating a consumer application data stream for the self-service event request by: in response to receiving the self-service event request, generating an event fanning configuration indicating one or more short-retention network events to provide to a consumer application server associated with the consumer application data stream and generating a low-latency fanned data stream broadcasting the one or more short-retention network events to the consumer application server according to the event fanning configuration. The series of acts 600 can also include an act of generating the plurality of network events for the global data stream by: receiving network events indicating modifications to network data associated with an inter-network facilitation system from one or more event logging servers and generating schematized versions of the network events from the one or more event logging servers. The series of acts 600 can also include an act of broadcasting the schematized versions of the network events via the global data stream.).

As to claims 8-14, reference is made to a system that corresponds to the method of claims 1-7 and is therefore met by the rejection of claims 1-7 above.

As to claims 15-20, reference is made to a computer program product that corresponds to the method of claims 1-6 and is therefore met by the rejection of claims 1-6 above.

Claim(s) 1-5, 7-12 and 14-19 is/are rejected under 35 U.S.C.
102(a)(1) as being anticipated by KOLODZIESKI (2018/0165139).

As to claim 1, KOLODZIESKI teaches a method comprising: receiving, from a consumer application (event subscriber), an event request indicating a requested network event from among a plurality of network events hosted by a network event data streaming platform ([0070] Pub/sub is a message-oriented interaction paradigm based on indirect addressing. Subscribers (e.g., cluster manager device 104, ESP cluster device 1000, event subscribing device 500) specify their interest in receiving information from ESPE 400 by subscribing to specific classes of events, while information sources (event publishing device 200, cluster manager device 104, ESP cluster device 1000) publish events to ESPE 400 without directly addressing the data recipients. Stream processing system 100 includes ESPE manager 400m that receives events from event publishing application 222 executing on event publishing device 200 of event publishing system 102 and that publishes processed events to ESPE A 400a of ESP cluster device 1000 of ESP cluster system 106. ESPE A 400a of ESP cluster device 1000 of ESP cluster system 106 receives events from ESPE manager 400m and publishes further processed events to event subscribing application 522 of event subscribing device 500 of event subscribing system 108. [0071] In an operation 306, a connection is made between event publishing application 222 and ESPE 400, such as ESPE manager 400m executing on cluster manager device 104, for each source window of the source windows 406 to which any measurement data value is published. To make the connection, the pointer to the created publishing client may be passed to a “Connect” function.
If event publishing application 222 is publishing to more than one source window of ESPE 400, a connection may be made to each started window using the pointer returned for the respective “Start” function call.); determining a request volume within the network event data streaming platform ([0073] In an operation 310, the created event block object is published to ESPE 400, for example, using the pointer returned for the respective “Start” function call to the appropriate source window. Event publishing application 222 passes the created event block object to the created publishing client, where the unique ID field in the event block object has been set by event publishing application 222 possibly after being requested from the created publishing client. In an illustrative embodiment, event publishing application 222 may wait to begin publishing until a “Ready” callback has been received from the created publishing client. The event block object is injected into the source window, continuous query, and project associated with the started publishing client. [0074] In an operation 312, a determination is made concerning whether or not processing is stopped. If processing is not stopped, processing continues in operation 308 to continue creating and publishing event block objects that include measurement data values. If processing is stopped, processing continues in an operation 314. [0075] In operation 314, the connection made between event publishing application 222 and ESPE 400 through the created publishing client is disconnected, and each started publishing client is stopped… [0080] In an operation 602, subscription services are initialized. [0081] In an operation 604, the initialized subscription services are started, which may create a subscribing client on behalf of event subscribing application 512 at event subscribing device 500. The subscribing client performs the various pub/sub activities for event subscribing application 512. 
For example, a URL to ESPE 400, such as ESPE A 400a of ESP cluster device 1000 of ESP cluster system 106, may be passed to a “Start” function. The “Start” function may validate and retain the connection parameters for a specific subscribing client connection and return a pointer to the subscribing client. For illustration, the URL may be formatted as “dfESP://<host>:<port>/<project name>/<continuous query name>/<window name>”. [0082] In an operation 606, a connection may be made between event subscribing application 512 executing on event subscribing device 500 and ESPE A 400a through the created subscribing client. To make the connection, the pointer to the created subscribing client may be passed to a “Connect” function and a mostly non-busy wait loop created to wait for receipt of event block objects. For example, the connection may be made to one or more computing devices of ESP cluster system 106. [0083] In an operation 608, an event block object is received by event subscribing application 512 executing on event subscribing device 500. [0084] In an operation 610, the received event block object is processed based on the operational functionality provided by event subscribing application 512. For example, event subscribing application 512 may extract data from the received event block object and store the extracted data in a database. In addition, or in the alternative, event subscribing application 512 may extract data from the received event block object and send the extracted data to a system control operator display system, an automatic control system, a notification device, an analytic device, etc. In addition, or in the alternative, event subscribing application 512 may extract data from the received event block object and send the extracted data to a post-incident analysis device to further analyze the data. 
Event subscribing application 512 may perform any number of different types of actions as a result of extracting data from the received event block object. The action may involve presenting information on a second display 516 or a second printer 520, presenting information using a second speaker 518, storing data in second computer-readable medium 522, sending information to another device using second communication interface 506, etc. A user may further interact with presented information using a second mouse 514 and/or a second keyboard 512. [0085] In an operation 612, a determination is made concerning whether or not processing is stopped. If processing is not stopped, processing continues in operation 608 to continue receiving and processing event block objects. If processing is stopped, processing continues in an operation 614. [0133] Manager application 712 may provide the REST API layer for a user to query for information described in manager configuration file 714, remote ESP model 716, manager ESP model 718, and router configuration file 720 and to query a status of ESPE manager 400m and/or of ESPE A 400a. For example, using the REST API, the user can create, delete, modify, and/or retrieve information related to the one or more projects 402, the one or more continuous queries 404, the one or more source windows 406, and/or the one or more derived windows 408 of ESPE manager 400m and/or of ESPE A 400a. The user can further start and stop a project of the one or more projects 402. The user still further may inject events into and retrieve events from ESPE manager 400m and/or of ESPE A 400a. [0134] Manager application 712 provides a mapping of sources from edge devices (event publishing system 102) to ESPE A 400a of ESP cluster system 106 that may include cloud devices. By managing a mapping between connectors and ESPE A 400a, manager application 712 facilitates an elastic deployment of ESP in the cloud and makes large scale deployment easier. 
For example, manager application 712 supports deployment of SAS® Event Stream Processing as a service to a cloud platform that creates and manages hardware resources in the cloud. [0135] ESPE A 400a may be provisioned on virtual machines of ESP cluster system 106. ESPE A 400a may each run remote engine 722a with their administrative and pub/sub ports open (also referred to as factory servers), for example, using a command such as “$DFESP_HOME/bin/dfesp_xml_server-pubsub 5575-http-pubsub 5577-http-admin 5576”. ESPE A 400a can receive and respond to HTTP requests from ESPE manager 400m using the port number port specified for the “-http-admin” input parameter. A port for pub/sub commands to an HTTP server executing on ESP cluster system 106 is defined using the port number port specified for the “-http-pubsub” input parameter. In alternative embodiments, the port for admin commands and the port for pub/sub commands may use the same port. The “-http-admin” input parameter and the “-http-pubsub” input parameter are associated with HTTP server elements <http-servers>. A port for pub/sub commands to ESPE A 400a is defined using the port number port specified for the “-pubsub” input parameter. In alternative embodiments, the command line parameters may be defined by default, input by a user through a user interface, etc. [0136] After provisioning ESPE A 400a as factory servers, manager application 712 can be controlled to: [0137] deploy projects to ESPE A 400a through an administrative REST API to the HTTP server; [0138] start one or more data sources of event publishing system 102 in an orchestrated fashion; [0139] stream events for processing and analyzing through the pub/sub API of ESPE manager 400m; and [0140] dynamically add or remove ESPE A 400a of ESP cluster system 106. [0141] Referring to FIG. 8, example operations associated with manager application 712 are described. 
Manager application 712 defines how incoming event streams from event publishing system 102 are transformed into meaningful outgoing event streams consumed by ESP cluster system 106 and ultimately event subscribing system 108. Additional, fewer, or different operations may be performed depending on the embodiment. The order of presentation of the operations of FIG. 8 is not intended to be limiting. [0165] The "esp-maps" element of the "esp-cluster-manager" element defines how event publishing sources defined by the <raw-sources> element, such as event publishing device 200 of event publishing system 102, are mapped to the one or more source windows 406 of a project of the one or more projects 402 of ESPE A 400a. The "esp-maps" element may be defined in manager configuration file 714 based on:

element esp-maps {
  element esp-map {
    attribute name { name_t },
    attribute cluster-ref { name_t },
    attribute model-ref { name_t },
    element map { ... }+
    element orchestration { ... }?
  }+
}

[0166] The "name" attribute specifies a name of the ESP cluster map. The "cluster-ref" attribute specifies a name of the ESP cluster that matches a "name" attribute field specified for an "esp-cluster" element. The "model-ref" attribute specifies a name of the ESP project that matches a "name" attribute field specified for a "project" element. The "map" element maps the source to the ESPE source window of ESPE A 400a. The "orchestration" element defines an order for starting connectors between data sources and ESPE manager 400m. [0167] The "map" element of the "esp-map" element may be defined in manager configuration file 714 based on:

element map {
  attribute name { name_t },
  element from { attribute source { name_t } },
  element multicast-destination { ... }*,
  element roundrobin-destination { ... }*,
  element hash-destination { ... }*,
}+

[0168] The "name" attribute specifies a name of the map. The "from" element specifies a name of the data source that matches a "name" attribute field specified for a "raw-source" element. One of "multicast-destination", "roundrobin-destination", or "hash-destination" is used to define how a specific ESPE A 400a of ESP cluster system 106 is selected as a recipient of an event block object from the data source. Selection of "multicast-destination" indicates that the event is sent to each ESPE A 400a. For illustration, the "multicast-destination" element of the "map" element may be defined based on:

element multicast-destination {
  attribute name { name_t },
  attribute opcode { ‘insert’ | ‘upsert’ | ‘update’ | ‘delete’ }?,
  element publish-target {
    element project-func { xsd:string [code] },
    element contquery-func { xsd:string [code] },
    element window-func { xsd:string [code] }
  }
}

[0175] The "orchestration" element of the "esp-map" element defines an order in which connectors between event publishing sources defined by the <raw-sources> element and ESPE manager 400m are started. To stream data into ESPE manager 400m, a connector is used. Connectors use the pub/sub API to interface with a variety of communication fabrics, drivers, and clients. Connectors are C++ classes that are instantiated in the same process space as ESPE manager 400m. By default, connectors may be started automatically when a project of the one or more projects 402 of ESPE manager 400m is started so that the connectors and project run concurrently.). [0228] In an operation 1114, a connection request is received from ESPE manager 400m executing on cluster manager device 104 for a source window to which data will be published. A connection request further is received from a computing device of event subscribing system 108, for example, from event subscribing device 500. [0229] In an operation 1116, an event block object is received from ESPE manager 400m.
An event block object containing one or more event objects is injected into a source window of the one or more source windows 406 defined from remote ESP model A 716a. [0230] In an operatio
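The subscribe flow that the examiner quotes from KOLODZIESKI (operations 602-614: start a subscribing client with a dfESP:// URL, connect, then loop receiving and processing event block objects) could be approximated as below. This is a sketch under stated assumptions: the actual SAS ESP pub/sub API is a C/C++ library, so the class and helper names here are invented for illustration; only the URL format "dfESP://<host>:<port>/<project name>/<continuous query name>/<window name>" comes from the quoted paragraph [0081].

```python
import re

# URL format quoted in KOLODZIESKI par. [0081].
DFESP_URL = re.compile(
    r"dfESP://(?P<host>[^:/]+):(?P<port>\d+)"
    r"/(?P<project>[^/]+)/(?P<contquery>[^/]+)/(?P<window>[^/]+)$"
)

def parse_dfesp_url(url: str) -> dict:
    """Validate and split a pub/sub URL of the form
    dfESP://<host>:<port>/<project>/<contquery>/<window>."""
    match = DFESP_URL.match(url)
    if match is None:
        raise ValueError(f"malformed dfESP URL: {url}")
    return match.groupdict()

class SubscribingClient:
    """Illustrative stand-in for the subscribing client that performs
    pub/sub activities on behalf of a subscribing application."""

    def __init__(self, url: str):
        # "Start" (operation 604): validate and retain the connection
        # parameters for this subscribing client.
        self.params = parse_dfesp_url(url)
        self.connected = False

    def connect(self) -> None:
        # "Connect" (operation 606): open the link to the ESPE.
        self.connected = True

    def run(self, event_blocks, handler) -> None:
        # Receive/process loop (operations 608-612): hand each received
        # event block to the application's handler until the source is
        # drained, then disconnect (operation 614).
        for block in event_blocks:
            handler(block)
        self.connected = False
```

In the real system the handler would extract data from each event block and store, display, or forward it, per operation 610 of the reference.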

Prosecution Timeline

Jan 11, 2023
Application Filed
Jun 05, 2025
Non-Final Rejection — §102, §103
Aug 01, 2025
Interview Requested
Aug 07, 2025
Examiner Interview Summary
Aug 07, 2025
Applicant Interview (Telephonic)
Aug 27, 2025
Response Filed
Nov 21, 2025
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566629
SERVER AND A RESOURCE SCHEDULING METHOD FOR USE IN A SERVER
2y 5m to grant Granted Mar 03, 2026
Patent 12561185
FLEXIBLE APPLICATION PROGRAMING INTERFACE USING VERSIONING REQUEST AND RESPONSE TRANSFORMERS
2y 5m to grant Granted Feb 24, 2026
Patent 12511148
SYSTEM AND METHOD SUPPORTING HIGHLY-AVAILABLE REPLICATED COMPUTING APPLICATIONS USING DETERMINISTIC VIRTUAL MACHINES
2y 5m to grant Granted Dec 30, 2025
Patent 12493543
DYNAMIC INSTRUMENTATION TO CAPTURE CLEARTEXT FROM TRANSFORMED COMMUNICATIONS
2y 5m to grant Granted Dec 09, 2025
Patent 11487562
ROLLING RESOURCE CREDITS FOR SCHEDULING OF VIRTUAL COMPUTER RESOURCES
2y 5m to grant Granted Nov 01, 2022
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
23%
Grant Probability
79%
With Interview (+56.0%)
3y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 65 resolved cases by this examiner. Grant probability derived from career allow rate.
