Prosecution Insights
Last updated: April 19, 2026
Application No. 18/348,970

TECHNIQUES FOR MITIGATING BACK PRESSURE, AUTO-SCALING THROUGHPUT, AND CONCURRENCY SCALING IN LARGE-SCALE AUTOMATED EVENT-DRIVEN DATA PIPELINES

Status: Non-Final OA (§103)
Filed: Jul 07, 2023
Examiner: ANYA, CHARLES E
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: Equifax Inc.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82%, above average (727 granted / 891 resolved; +26.6% vs Tech Center average)
Interview Lift: +33.5% for resolved cases with an interview vs. without
Typical Timeline: 3y 2m average prosecution; 41 applications currently pending
Career History: 932 total applications across all art units

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 61.1% (+21.1% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Deltas are measured against the Tech Center average estimate. Based on career data from 891 resolved cases.

Office Action

§103
DETAILED ACTION

Claims 1-20 are pending in this application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1, 5, 8, 12, 15 and 19 are objected to because of the following informalities: claims 1, 5, 8, 12, 15 and 19 include the abbreviation “ID”. Abbreviations are allowed in claims; however, the first occurrence must be spelled out, for instance, “Identifier (ID)”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8-12, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2021/0182267 A1 to Andreakis et al. in view of U.S. Pub. No. 2020/0034076 A1 to Zentz et al.

As to claim 1, Andreakis teaches a system comprising: a processor (CPU 604); and a non-transitory computer-readable medium (System Disk 606) comprising instructions that are executable by the processor to cause the processor to: receive, from a data warehouse (Source Datastore(s) 120/Source Database(s) 121), a data stream (Log Event(s) 128) at a storage service (Sink datastore(s) 160/Derived Datastore(s) 161) (“…Change log 127 includes any technically feasible history of actions executed or performed on source datastore(s) 120 or the like.
In some embodiments, change log 127 includes a log of one or more log event(s) 128 reflecting one or more changes in source datastore(s) 120 (e.g., changed rows, changed column schema, or the like). In some embodiments, the change log 127 includes a linear history of committed changes and non-stale reads associated with source datastore(s) 120…In some embodiments, log event(s) 128 include any event emitted by source datastore(s) 120 based on changes to rows, columns, or the like. In some embodiments, the log event(s) 128 may be emitted in real-time. In some embodiments, log event(s) 128 are of one or more types including create, update, delete, or the like. In some embodiments, log event(s) 128 are associated with one or more attributes including a log sequence number (LSN), the column values at the time of the operation (e.g., column title at a given point in time or the like), the schema that applied at the time of the operation, the full pre and post image of the row from the time of the change, a link to the last log record, type of database log record, information associated with the change that triggered the log record to be written, or the like…Derived datastore(s) 161 include any technically feasible storage infrastructure storing data derived from source datastore(s) 120. 
In some embodiments, derived datastore(s) 161 include one or more heterogeneous datastores, one or more relational databases, one or more file systems, one or more distributed datastores, one or more directory services, one or more active databases, one or more cloud databases, one or more data warehouses, one or more distributed databases, one or more embedded database systems, one or more document-oriented databases, one or more federated database systems, one or more array database management systems, one or more real-time databases, one or more temporal databases, one or more logic databases, or the like…” paragraphs 0048/0049/0051); receive, at a first function of a first computing service, one or more notifications (Change Log 127/Log Events 128) (“…Output writer 144 includes any technically feasible write configured to collect one or more events and write to an output such as sink datastore(s) 160 or the like. In some embodiments, log and dump events are sent to the output using a non-blocking operation or the like. In some embodiments, output writer 144 runs its own thread and collects one or more events in an output buffer prior to writing the events to an output in order. In some embodiments, output writer includes retrieved schema 153 or an associated schema identifier in the output…In some embodiments, the output buffer is stored in memory. In some embodiments, output writer 144 includes an interface configured to plugin to any output such as sink datastore(s) 160…In some embodiments, output writer 144 includes an event serializer configured to serialize events into a customized format prior to appending the events to an output buffer, writing events to an output, or the like. In some embodiments, output writer 144 includes an interface to allow plugin of a custom formatter for serializing events in a customized format. 
In some events, output writer 144 appends events to an output buffer using one thread, and uses another thread to consume the events from the output buffer and send them to the output…” paragraphs 0061-0063), wherein each notification is generated when a file containing data from the data stream is created in the storage service (Change Log 127/Log Events 128) (“…Change log 127 includes any technically feasible history of actions executed or performed on source datastore(s) 120 or the like. In some embodiments, change log 127 includes a log of one or more log event(s) 128 reflecting one or more changes in source datastore(s) 120 (e.g., changed rows, changed column schema, or the like). In some embodiments, the change log 127 includes a linear history of committed changes and non-stale reads associated with source datastore(s) 120…In some embodiments, log event(s) 128 include any event emitted by source datastore(s) 120 based on changes to rows, columns, or the like. In some embodiments, the log event(s) 128 may be emitted in real-time. In some embodiments, log event(s) 128 are of one or more types including create, update, delete, or the like. In some embodiments, log event(s) 128 are associated with one or more attributes including a log sequence number (LSN), the column values at the time of the operation (e.g., column title at a given point in time or the like), the schema that applied at the time of the operation, the full pre and post image of the row from the time of the change, a link to the last log record, type of database log record, information associated with the change that triggered the log record to be written, or the like…” paragraphs 0048/0049); in response to each notification, pass, by the first function, a message to a queue (output buffer/non-blocking) (“…Output writer 144 includes any technically feasible write configured to collect one or more events and write to an output such as sink datastore(s) 160 or the like. 
In some embodiments, log and dump events are sent to the output using a non-blocking operation or the like. In some embodiments, output writer 144 runs its own thread and collects one or more events in an output buffer prior to writing the events to an output in order. In some embodiments, output writer includes retrieved schema 153 or an associated schema identifier in the output…In some embodiments, output writer 144 includes an event serializer configured to serialize events into a customized format prior to appending the events to an output buffer, writing events to an output, or the like. In some embodiments, output writer 144 includes an interface to allow plugin of a custom formatter for serializing events in a customized format. In some events, output writer 144 appends events to an output buffer using one thread, and uses another thread to consume the events from the output buffer and send them to the output…” paragraphs 0061/0063); receive, at an invocation of a second function at a second computing service, one or more messages from the queue (“…Output writer 144 includes any technically feasible write configured to collect one or more events and write to an output such as sink datastore(s) 160 or the like. In some embodiments, log and dump events are sent to the output using a non-blocking operation or the like. In some embodiments, output writer 144 runs its own thread and collects one or more events in an output buffer prior to writing the events to an output in order. In some embodiments, output writer includes retrieved schema 153 or an associated schema identifier in the output…In some embodiments, output writer 144 includes an event serializer configured to serialize events into a customized format prior to appending the events to an output buffer, writing events to an output, or the like. In some embodiments, output writer 144 includes an interface to allow plugin of a custom formatter for serializing events in a customized format. 
In some events, output writer 144 appends events to an output buffer using one thread, and uses another thread to consume the events from the output buffer and send them to the output…” paragraph 0061/0063); retrieve, by the second function, data from one or more files based on the address in each of the one or more messages (Output Writer 144); and write, by the second function, the data to a database (Sink datastore(s) 160/output) (“…Output writer 144 includes any technically feasible write configured to collect one or more events and write to an output such as sink datastore(s) 160 or the like. In some embodiments, log and dump events are sent to the output using a non-blocking operation or the like. In some embodiments, output writer 144 runs its own thread and collects one or more events in an output buffer prior to writing the events to an output in order. In some embodiments, output writer includes retrieved schema 153 or an associated schema identifier in the output…” paragraph 0061).

Andreakis is silent with reference to the message comprising an address of a respective file within the storage service and a message group ID selected from a range and wherein the invocation of the second function to which each message is routed is based on the message group ID. Zentz teaches the message (write commands) comprising an address (allocated block) of a respective file within the storage service (SSDs) (“…An embodiment of a data storage system may operate in accordance with one or more storage protocols or standards, such as the SCSI standard or protocol or the NVMe (Non-Volatile Memory Express) standard or protocol. Write Streams are features included in standards, such as the NVMe and SCSI standards, for use with SSDs, such as those providing non-volatile backend physical storage on the data storage system.
Write Streams generally allow write commands to an SSD to be tagged with an identifier which is used to optimize data placement of the write data on the storage media of the SSD. The identifier may be assigned to a write stream where a group of related data segments have the same identifier. Related data having the same identifier may be grouped together so that when writing to SSD physical media, processing by the SSD places related data (having the same identifier) together, such as in the same allocated block, so that such data may also be erased together as a group. Thus, data associated with the same Write Stream is expected to be invalidated (e.g., via a data operation such as update/write, deallocation, etc.) at the same time. Use of Write Streams allows for SSD block allocation where related data having a similar expected data lifetime may be placed in the same erase block thereby reducing write amplification (e.g., such as due to rewriting data and garbage collection when performing space reclamation). Write Streams are intended to improve the performance and endurance of an SSD over its lifetime…” paragraph 0039) and a message group ID selected from a range (“…In step 808, processing is performed to randomly assign each extent ID of the extent ID range to a RAID group ID uniquely associated with one of the RAID groups in the system…” paragraph 0073) and wherein the invocation of the second function to which each message is routed is based on the message group ID (Write Streams generally allow write commands to an SSD to be tagged with an identifier which is used to optimize data placement of the write data on the storage media of the SSD) (“…An embodiment of a data storage system may operate in accordance with one or more storage protocols or standards, such as the SCSI standard or protocol or the NVMe (Non-Volatile Memory Express) standard or protocol. 
Write Streams are features included in standards, such as the NVMe and SCSI standards, for use with SSDs, such as those providing non-volatile backend physical storage on the data storage system. Write Streams generally allow write commands to an SSD to be tagged with an identifier which is used to optimize data placement of the write data on the storage media of the SSD. The identifier may be assigned to a write stream where a group of related data segments have the same identifier. Related data having the same identifier may be grouped together so that when writing to SSD physical media, processing by the SSD places related data (having the same identifier) together, such as in the same allocated block, so that such data may also be erased together as a group. Thus, data associated with the same Write Stream is expected to be invalidated (e.g., via a data operation such as update/write, deallocation, etc.) at the same time. Use of Write Streams allows for SSD block allocation where related data having a similar expected data lifetime may be placed in the same erase block thereby reducing write amplification (e.g., such as due to rewriting data and garbage collection when performing space reclamation). Write Streams are intended to improve the performance and endurance of an SSD over its lifetime…” paragraph 0039) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claim invention to modify the system of Andreakis with the teaching of Zentz because the teaching of Zentz would improve the system of Andreakis by using Write Streams that allows write commands to an SSD to be tagged with an identifier which is used to optimize data placement of the write data on the storage media of the SSD (Zentz paragraph 0039). 
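The message flow the rejection maps onto claim 1 can be visualized with a small, self-contained sketch (all names here are hypothetical illustrations, not code from the application or the cited references): a first function reacts to each file-created notification by enqueuing a message carrying the file's address plus a message group ID randomly selected from a range (the claim 5 limitation), each group ID routes to its own invocation of a second function, and that function reads the file and writes the data to a database. The per-group queues use Python's `queue.Queue`, whose FIFO ordering also mirrors the claim 6 limitation.

```python
import queue
import random

GROUP_ID_RANGE = 8  # hypothetical size of the message group ID range

# One FIFO queue per message group ID; each group ID routes to its own
# invocation of the second function (the claim 1 routing limitation).
group_queues = {gid: queue.Queue() for gid in range(GROUP_ID_RANGE)}
database = []  # stand-in for the destination database

# Simulated storage service: file address -> file contents.
storage = {"s3://bucket/part-0": "rows-0", "s3://bucket/part-1": "rows-1"}

def first_function(notification):
    """Runs once per file-created notification from the storage service.

    Builds a message with the file's address and a message group ID
    randomly selected from the range, then passes it to a queue.
    """
    group_id = random.randrange(GROUP_ID_RANGE)  # random selection (claim 5)
    message = {"address": notification["file_address"], "group_id": group_id}
    group_queues[group_id].put(message)
    return message

def second_function(group_id):
    """One invocation per message group ID: drains that group's queue in
    FIFO order, fetches each file by its address, writes the data out."""
    q = group_queues[group_id]
    while not q.empty():
        msg = q.get()
        database.append(storage[msg["address"]])

for addr in storage:
    first_function({"file_address": addr})
for gid in group_queues:
    second_function(gid)

print(sorted(database))  # ['rows-0', 'rows-1']
```

Because each group ID maps to exactly one consumer, messages sharing a group ID are processed in arrival order, while different groups can be drained independently; that is the property the claimed routing-by-group-ID provides.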
As to claim 2, Andreakis teaches the system of claim 1, wherein the storage service comprises an event-driven object storage system (Sink datastore(s) 160) (“…Sink datastore(s) 160 include any technically feasible storage infrastructure for storing and managing collections of data. In some embodiments, sink datastore(s) 160 include one or more heterogeneous datastores, one or more relational databases, one or more file systems, one or more distributed datastores, one or more directory services, one or more active databases, one or more cloud databases, one or more data warehouses, one or more distributed databases, one or more embedded database systems, one or more document-oriented databases, one or more federated database systems, one or more array database management systems, one or more real-time databases, one or more temporal databases, one or more logic databases, or the like. In some embodiments, sink datastore(s) 160 operate on a plurality of servers, a plurality of storage devices, or the like. The server may be a standalone server, a cluster or “farm” of servers, one or more network appliances, or the like. In some embodiments, sink datastore(s) 160 include data managed by one or more teams, one or more business entities, or the like. In some embodiments, sink datastore(s) 160 include any downstream application configured to propagate received events (e.g., stream processing application, data analytics platform), search index, cache, or the like. Sink datastore(s) 160 includes, without limitation, derived datastore(s) 161, stream(s) 162 and application programming interfaces (API(s)) 163…” paragraph 0050). 
As to claim 3, Andreakis teaches the system of claim 2, wherein the first function is triggered to pass messages to the queue based on event data generated by the storage service, the event data comprising the notification (output buffer/non-blocking) (“…Output writer 144 includes any technically feasible write configured to collect one or more events and write to an output such as sink datastore(s) 160 or the like. In some embodiments, log and dump events are sent to the output using a non-blocking operation or the like. In some embodiments, output writer 144 runs its own thread and collects one or more events in an output buffer prior to writing the events to an output in order. In some embodiments, output writer includes retrieved schema 153 or an associated schema identifier in the output…In some embodiments, output writer 144 includes an event serializer configured to serialize events into a customized format prior to appending the events to an output buffer, writing events to an output, or the like. In some embodiments, output writer 144 includes an interface to allow plugin of a custom formatter for serializing events in a customized format. In some events, output writer 144 appends events to an output buffer using one thread, and uses another thread to consume the events from the output buffer and send them to the output…” paragraphs 0061/0063).

As to claim 4, Andreakis teaches the system of claim 1, wherein the storage service and the database are hosted in a cloud-based system (Sink datastore(s) 160) (“…Sink datastore(s) 160 include any technically feasible storage infrastructure for storing and managing collections of data.
In some embodiments, sink datastore(s) 160 include one or more heterogeneous datastores, one or more relational databases, one or more file systems, one or more distributed datastores, one or more directory services, one or more active databases, one or more cloud databases, one or more data warehouses, one or more distributed databases, one or more embedded database systems, one or more document-oriented databases, one or more federated database systems, one or more array database management systems, one or more real-time databases, one or more temporal databases, one or more logic databases, or the like. In some embodiments, sink datastore(s) 160 operate on a plurality of servers, a plurality of storage devices, or the like. The server may be a standalone server, a cluster or “farm” of servers, one or more network appliances, or the like. In some embodiments, sink datastore(s) 160 include data managed by one or more teams, one or more business entities, or the like. In some embodiments, sink datastore(s) 160 include any downstream application configured to propagate received events (e.g., stream processing application, data analytics platform), search index, cache, or the like. Sink datastore(s) 160 includes, without limitation, derived datastore(s) 161, stream(s) 162 and application programming interfaces (API(s)) 163…” paragraph 0050).

As to claim 5, Andreakis teaches the system of claim 1; however, it is silent with reference to wherein the message group ID is randomly selected from the range. Zentz teaches this limitation (“…In step 808, processing is performed to randomly assign each extent ID of the extent ID range to a RAID group ID uniquely associated with one of the RAID groups in the system…” paragraph 0073).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Andreakis with the teaching of Zentz because the teaching of Zentz would improve the system of Andreakis by using Write Streams that allow write commands to an SSD to be tagged with an identifier which is used to optimize data placement of the write data on the storage media of the SSD (Zentz paragraph 0039).

As to claims 8 and 15, see the rejection of claim 1 above. As to claims 9 and 16, see the rejection of claim 2 above. As to claims 10 and 17, see the rejection of claim 3 above. As to claims 11 and 18, see the rejection of claim 4 above. As to claims 12 and 19, see the rejection of claim 5 above.

Claims 6, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2021/0182267 A1 to Andreakis et al. in view of U.S. Pub. No. 2020/0034076 A1 to Zentz et al. as applied to claims 1, 8 and 15 above, and further in view of U.S. Pub. No. 2017/0161288 A1 to Feldman et al.

As to claim 6, Andreakis as modified by Zentz teaches the system of claim 1; however, it is silent with reference to wherein the queue is a first-in-first-out (FIFO) queue. Feldman teaches wherein the queue is a first-in-first-out (FIFO) queue (FIFO memory) (“…As illustrated by FIG. 1, data storage device 101 comprises data storage 104 (e.g., a block device, a file system, a network block device, a virtual block device, etc.), file event notification component 120, and analytics engine component 130. In this regard, file event notification component 120 (e.g., inotify, a filter driver, etc.) can detect accesses, e.g., activity 102, of respective files of data storage 104. For example, in embodiment(s), activity 102 can comprise processes related to creating a file in data storage 104, modifying the file, reading the file, deleting the file, opening the file, closing the file, etc.
In other embodiment(s), file event notification component 120 can detect properties, e.g., an I/O latency of an access of the accesses, a data throughput associated with the access, etc. [0037] Further, in response to detecting activity 102, file event notification component 120 can generate message file-system event representing an access corresponding to activity 102, and send the file-system event to analytics engine component 130. In turn, analytics engine component 130 can receive the file-system event from file event notification component 120, generate a subscriber event message representing the file-system event, and store the subscriber event message in a queue (see e.g., event queue 222 below), a FIFO memory, etc….” paragraphs 0036/0037).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Andreakis and Zentz with the teaching of Feldman because the teaching of Feldman would improve the system of Andreakis and Zentz by providing a data structure (often, specifically a data buffer) where the oldest (first) entry, or "head" of the queue, is processed first.

As to claims 13 and 20, see the rejection of claim 6 above.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2021/0182267 A1 to Andreakis et al. in view of U.S. Pub. No. 2020/0034076 A1 to Zentz et al. as applied to claims 1 and 8 above, and further in view of U.S. Pub. No. 2018/0067864 A1 to Park et al.

As to claim 7, Andreakis as modified by Zentz teaches the system of claim 1; however, it is silent with reference to wherein a desired throughput can be adjusted by controlling a number of routines of each invocation of the second function that can simultaneously write to the database.
Park teaches wherein a desired throughput can be adjusted (Step 405) by controlling a number of routines of each invocation of the second function that can simultaneously write to the database (remote storage unit/cloud storage) (“…As described above, each writer may generate an individual event stream. In one embodiment, the messages from the writers (i.e., individual event stream) may be stored in a remote storage unit, for example a cloud storage. The remote storage unit may also store a history of readers' subscription list of writers. Therefore, a replay of a reader's past operation (i.e., what messages the reader saw) may be reconstructed. In one embodiment, the remote storage unit may be integrated with the repeater block components…FIG. 4A illustrates a flow diagram of receiving messages in accordance with an embodiment of the present invention. In one embodiment, the method of FIG. 4A may be performed by one of the readers 140.1-140.P of FIG. 1, or alternatively, it may be performed by another device. In step 405, the reader may set a list of writers to which it will listen. The writers on the list may be distributed across different locations in the distributed computing network. In step 410, the reader may receive messages from the writers on its list. In addition, the reader may also receive heartbeats from the writers on its list…” paragraphs 0028/0041).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Andreakis and Zentz with the teaching of Park because the teaching of Park would improve the system of Andreakis and Zentz by providing a technique for setting a list of writers for writing data messages to remote distributed storages and as such allowing for concurrent writing of data messages.

As to claim 14, see the rejection of claim 7 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S.
Pub. No. 2023/0179559 A1 to Ballantyne et al., in which techniques are provided for distributing event messages from a first service to additional services using a message store. U.S. Pub. No. 2024/0378087 A1 to Talwalkar et al. is directed to a system and method for on-demand event-driven data transfer in which a serverless listener detects a transfer request from a location remote from the data transfer system.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES E ANYA whose telephone number is (571) 272-3757. The examiner can normally be reached Mon-Fri, 9-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KEVIN YOUNG, can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES E ANYA/
Primary Examiner, Art Unit 2194
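The claim 7 limitation at issue in the rejection of claims 7 and 14 — adjusting throughput by capping how many routines of a second-function invocation may write to the database at once — is a standard concurrency-throttle pattern. A minimal sketch, with a hypothetical `MAX_WRITERS` knob and a semaphore standing in for the claimed control (this illustrates the limitation generically, not any party's actual implementation):

```python
import threading
import time

MAX_WRITERS = 4  # hypothetical tuning knob: raise or lower to adjust throughput

write_slots = threading.Semaphore(MAX_WRITERS)  # caps simultaneous DB writers
state_lock = threading.Lock()
database = []
in_flight = 0
max_in_flight = 0  # records the peak number of concurrent writers observed

def write_routine(record):
    """One writer routine of a second-function invocation; the semaphore
    bounds how many routines may write to the database at the same time."""
    global in_flight, max_in_flight
    with write_slots:  # blocks once MAX_WRITERS routines hold a slot
        with state_lock:
            in_flight += 1
            max_in_flight = max(max_in_flight, in_flight)
        time.sleep(0.005)  # simulated database write latency
        with state_lock:
            database.append(record)
            in_flight -= 1

threads = [threading.Thread(target=write_routine, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(database), max_in_flight <= MAX_WRITERS)  # 20 True
```

All 20 records land in the database, but at no point do more than `MAX_WRITERS` routines write concurrently; throughput scales with the knob while the database sees a bounded write load.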

Prosecution Timeline

Jul 07, 2023: Application Filed
Feb 17, 2026: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591471: KNOWLEDGE GRAPH REPRESENTATION OF CHANGES BETWEEN DIFFERENT VERSIONS OF APPLICATION PROGRAMMING INTERFACES (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591455: PARAMETER-BASED ADAPTIVE SCHEDULING OF JOBS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585510: METHOD AND SYSTEM FOR AUTOMATED EVENT MANAGEMENT (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579014: METHOD AND A SYSTEM FOR PROCESSING USER EVENTS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572393: CONTAINER CROSS-CLUSTER CAPACITY SCALING (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82% (99% with interview, a +33.5% lift)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 891 resolved cases by this examiner. Grant probability derived from career allow rate.
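The derivation the footnote describes is simple arithmetic, reproduced here as a sanity check on the headline figures (the Tech Center average is inferred from the reported +26.6% delta, not stated directly anywhere above):

```python
# Grant probability is the examiner's career allow rate:
granted, resolved = 727, 891          # figures reported in Examiner Intelligence
allow_rate = granted / resolved       # ~0.816
print(round(allow_rate * 100))        # 82

# The "+26.6% vs TC avg" delta implies a Tech Center average near 55%:
tc_avg = allow_rate * 100 - 26.6
print(round(tc_avg))                  # 55
```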
