Prosecution Insights
Last updated: April 19, 2026
Application No. 17/713,253

Digital simulator of data communication apparatus

Status: Non-Final OA (§103)
Filed: Apr 05, 2022
Examiner: SHALU, ZELALEM W
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Mellanox Technologies Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 29% (At Risk)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 48%

Examiner Intelligence

Career Allow Rate: 29% (31 granted / 108 resolved; -26.3% vs TC avg). This examiner grants only 29% of cases.
Interview Lift: +19.0% (strong), comparing resolved cases with an interview to those without.
Avg Prosecution: 3y 2m typical timeline, with 34 applications currently pending.
Total Applications: 142 across all art units (career history).
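
The tile arithmetic above is easy to reproduce. Below is a minimal sketch in Python; the variable names are mine, and treating the interview lift as additive is an assumption about how the dashboard combines the figures, not a documented formula:

```python
# Reproduce the examiner-intelligence tiles from the raw counts above.
# Variable names are mine; treating the interview lift as additive is an
# assumption about the dashboard's formula, not a documented method.

granted = 31           # granted cases among this examiner's resolved cases
resolved = 108         # total resolved cases
interview_lift = 0.19  # allowance-rate lift observed for interviewed cases

career_allow_rate = granted / resolved               # 31 / 108 ~= 0.287
with_interview = career_allow_rate + interview_lift  # 0.287 + 0.19 ~= 0.477

print(f"Career allow rate: {career_allow_rate:.0%}")  # -> 29%
print(f"With interview:    {with_interview:.0%}")     # -> 48%
```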

Statute-Specific Performance

§101: 14.3% (-25.7% vs TC avg)
§103: 63.4% (+23.4% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)
Comparisons are against a Tech Center average estimate. Based on career data from 108 resolved cases.
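
Each delta in the table is internally consistent with a single Tech Center average of 40.0% (for every row, rate minus delta equals 40.0). A short sketch, assuming that implied 40.0% baseline, reproduces the published deltas:

```python
# Re-derive the "vs TC avg" deltas from the per-statute rates.
# TC_AVG = 40.0 is inferred: for every row, rate minus delta equals 40.0.
# It is an assumption about the chart's baseline, not a published figure.

TC_AVG = 40.0  # implied Tech Center average estimate, in percent

examiner_rate = {"§101": 14.3, "§103": 63.4, "§102": 8.1, "§112": 10.8}

for statute, rate in examiner_rate.items():
    delta = rate - TC_AVG
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```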

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the amendment filed on 11/02/2025. Claims 23-42 are pending in the case.

Applicant Response

3. In Applicant's response dated 11/02/2025, Applicant canceled claims 1-22, added new claims 23-42, and argued against all objections and rejections previously set forth in the Office Action dated 09/03/2025.

Continued Examination under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/02/2025 has been entered.

Examiner Comments

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 23-42 are rejected under 35 U.S.C. 103 as being unpatentable over Kachare (Pub. No. US 2018/0307650 A1, published 2018-10-25) in view of Ismailsheriff (Pub. No. US 2021/0092068 A1, published 2021-03-25).

Regarding independent Claim 23: Kachare teaches a data communication apparatus (see Kachare: Fig. 2, [0036], "an LL-DAX storage and data access system according to one embodiment of the present disclosure includes a host device 100 and an LL-DAX eSSD 101 (i.e., a series of NVMe SSD devices connected over Ethernet)."), comprising:

- a network interface to receive content transfer requests from at least one remote device over a packet data network (see Kachare: Fig. 2, [0049], "at operation 201, the eSSD receives an RDMA READ request or an RDMA WRITE request from the LL-DAX block storage software layer 103 (see FIG. 2) at the host device.");

- at least one peripheral interface to connect to local peripheral storage devices (see Kachare: Fig. 2, [0036], "The LL-DAX eSSD is a standard NVMe-oF eSSD, with additional LL-DAX feature support. The host device 100 includes an application 102, LL-DAX block storage software 103, and an RDMA transport layer 104.");

- a storage sub-system comprising a cache and a random-access memory (RAM) (see Kachare: Fig. 2, [0036], "an LL-DAX receive buffer 106 (cache) connected to the RDMA target layer 105, an LL-DAX host interface (I/F) 107 connected to the LL-DAX receive buffer 106"), wherein the storage sub-system is to evict overflow from the cache to the RAM (see Kachare: Fig. 2, [0043], "the receive buffer size present on the eSSD 101, the maximum RDMA WRITE size supported by the eSSD 101, and the block size. In one or more embodiments, the LL-DAX storage capacity 109 may be dedicated storage capacity inside the eSSD 101 for LL-DAX applications and users. In one or more embodiments, LL-DAX and non-LL-DAX applications can share the same storage capacity inside the eSSD 101 through the file system or other system stack layers (e.g., the LL-DAX and non-LL-DAX applications may exchange data with each other)");

- processing circuitry to manage transfer of content between the at least one remote device and the local peripheral storage devices via the at least one peripheral interface and the cache, responsively to the content transfer requests (see Kachare: Fig. 2, [0036], "a flash translation layer (FTL) 108 connected to the LL-DAX host interface (I/F) 107, and LL-DAX storage 109 connected to the FTL 108. As described in more detail below, the LL-DAX block storage software 103 in the host device 100 utilizes an LL-DAX protocol to send host commands (e.g., RDMA READ and RDMA WRITE commands) to the RDMA target 105 in the LL-DAX eSSD 101 to obtain low-latency direct access to data stored in the LL-DAX storage 109 (e.g., the LL-DAX block storage software 103 provides storage service to the application 102 or other system software layers at the host 100 and utilizes RDMA READ and RDMA WRITE requests to transfer data to and from the LL-DAX storage in the LL-DAX eSSDs)."); and

- processing circuitry to pace serving the content transfer requests so that while some content transfer requests are being served, other content transfer requests pending serving are queued in at least one pending queue (see Kachare: Fig. 2, [0047], "NVMe-oF protocol utilizes RDMA queue pairs (QPs) to transport commands, data, and completions. An NVMe-oF host driver utilizes RDMA SEND requests to send commands to the eSSD. The eSSD utilizes RDMA READ and RDMA WRITE requests for data transfers. The eSSD also utilizes RDMA SEND requests to post completions (e.g., acknowledgements of data persistence) to the host.").

Kachare does not teach the data communication apparatus comprising processing circuitry to use a trained pacing artificial intelligence model to find a pacing action from which to derive a pacing metric for use in pacing the serving of the content transfer requests in the storage sub-system.

However, Ismailsheriff teaches the data communication apparatus comprising processing circuitry to use a trained pacing artificial intelligence model (see Ismailsheriff: Fig. 4, [0069], "a data store for training data 424, and one or more data stores for storing the output data for the machine learning platform 400, such as a data store for traffic class-specific congestion signatures 426 (example of trained pacing artificial intelligence model) to determine whether a given flow corresponds to a predetermined traffic class and predetermined congestion state, a data store for window size and congestion threshold estimators 428 (example of trained pacing artificial intelligence model) to determine a current window size W.sub.LATEST and/or current congestion threshold T.sub.LATEST for a given flow of a predetermined traffic class and predetermined congestion state, and a data store for STACKing reinforcement learning agents 430 (example of trained pacing artificial intelligence model) to determine whether a given flow is suitable for STACKing"), to find a pacing action (see Ismailsheriff: Fig. 6, [0024], "The network device can pace or adjust the transmission rate or throughput TR of the flow according to a target transmission rate TR.sub.TGT for the traffic class and congestion state to which the flow corresponds. In some embodiments, the network device may also utilize a reinforcement learning agent to determine whether to perform Selective Tracking of Acknowledgments (STACKing, e.g., stacking acknowledgment information in a network device's main memory or on-chip memory stack instead of the device's interface buffers or off-chip memory) to improve buffer utilization and further improve traffic shaping for one or more network devices."), from which to derive a pacing metric for use in pacing the serving of the content transfer requests in the storage sub-system (see Ismailsheriff: Fig. 4, [0070], "The traffic data collector 404 can capture network traffic data, such as packet traces, session logs, and performance metrics from different layers of the Open Systems Interconnection (OSI) model, the TCP/IP model, or other network model. The traffic data collector 404 can capture the traffic data at various levels of granularity, such as per datagram, unidirectional flow, or bidirectional flow (including TCP flows, connections, sessions, etc.), or other network data unit.").

Because Kachare and Ismailsheriff are in the same or similar field of endeavor (managing congestion and pacing traffic in network communication systems), it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify Kachare's storage-request pacing system to include a system that applies artificial-intelligence congestion prediction and pacing techniques as taught by Ismailsheriff. One would have been motivated to make such a combination to improve the accuracy, responsiveness, and adaptability of pacing actions.

Regarding Claim 24: As shown above, Kachare and Ismailsheriff teach all the limitations of Claim 23. Kachare further teaches the system wherein data is read from and written to the local peripheral storage devices via the cache and the at least one peripheral interface (see Kachare: Fig. 2, [0050], "With continued reference to FIG. 6, at operation 205, the LL-DAX HIF logic 107 (see FIG. 2) determines whether the host command is an RDMA READ request or an RDMA WRITE request. If the host command is determined to be an RDMA WRITE command, at operation 206 the LL-DAX HIF logic 107 determines whether the RDMA WRITE command is an LL-DAX WRITE command or an LL-DAX DELETE command (which is transmitted with the RDMA WRITE command) by inspecting the opcode value of the host command.").

Regarding Claim 25: As shown above, Kachare and Ismailsheriff teach all the limitations of Claim 23. Kachare further teaches the system wherein the local peripheral storage devices comprise NVMe drives (see Kachare: Fig. 2, [0017], "The NVMe SSD devices may include LL-DAX host interface (HIF) logic configured to arbitrate between host commands in two or more RDMA queue pairs (QPs). A first RDMA QP of the two or more RDMA QPs may be dedicated to a first command and a second RDMA QP of the two or more RDMA QPs may be dedicated to a second command different than the first command.").

Regarding Claim 26: As shown above, Kachare and Ismailsheriff teach all the limitations of Claim 23. Kachare further teaches the system wherein the content transfer requests comprise RDMA requests (see Kachare: Fig. 2, [0038], "The LL-DAX Protocol uses remote direct memory access (RDMA) transport including RDMA READ and RDMA WRITE requests to perform data transfers (i.e., LL-DAX WRITE and LL-DAX READ commands are utilized for data access). RDMA READ requests are utilized to retrieve or fetch data from the eSSD 101. RDMA WRITE requests are utilized to transfer data from the host 100 to the eSSD 101. As shown in Table 1 below, each RDMA READ and RDMA WRITE request includes an opcode (OPC) field, an address (ADDR) field, and a length (LEN) field pertaining to the LL-DAX storage.").

Regarding Claim 27: As shown above, Kachare and Ismailsheriff teach all the limitations of Claim 23. Kachare further teaches the system wherein the at least one peripheral interface comprises at least one PCIe interface (see Kachare: Fig. 2, [0038], "low-latency data access to flash memory in NVMe SSD devices connected over Ethernet according to one embodiment of the present disclosure includes a task of transmitting, from the LL-DAX block storage software layer 103 at the host 100, an RDMA WRITE request to the flash memory including data, a storage address, a length of a data transfer operation, and an operation code").

Regarding Claim 28: As shown above, Kachare and Ismailsheriff teach all the limitations of Claim 23. Ismailsheriff further teaches the system wherein the processing circuitry is to apply the trained pacing artificial intelligence model to find the pacing action responsively to at least one previous state of the storage sub-system (see Ismailsheriff: Fig. 5, [0140], "the network controller can distribute the machine learning models generated at step 508 to the network devices for the devices to apply to new traffic data, and the network controller can receive the output of the machine learning models. For example, the network controller can continuously monitor the performance of a machine learning model and when precision, recall, accuracy, and/or other performance metrics (e.g., Table 4) are below certain thresholds or when a new machine learning model improves on the performance of the older machine learning model, the network controller can generate an alert to inform an administrator to update the older machine learning model."), wherein each previous state of the storage sub-system comprises one or more of the following: a bandwidth of the storage sub-system; a cache hit rate of the storage sub-system; a pacing metric of the storage sub-system; a number of buffers in flight over the storage sub-system; a cache eviction rate; a number of bytes waiting to be processed; a number of bytes of transfer requests received over a given time window; a difference in a number of bytes in flight over the given time window; a number of bytes of the transfer requests completed over a given time window; and a number of bytes to submit over the given time window (see Ismailsheriff: [0070], "traffic data collector 404 can capture network traffic data, such as packet traces, session logs, and performance metrics from different layers of the Open Systems Interconnection (OSI) model, the TCP/IP model, or other network model. The traffic data collector 404 can capture the traffic data at various levels of granularity, such as per datagram, unidirectional flow, or bidirectional flow (including TCP flows, connections, sessions, etc.), or other network data unit."). See motivation to combine in claim 23.

Regarding Claim 29: As shown above, Kachare and Ismailsheriff teach all the limitations of Claim 23. Kachare further teaches the system wherein the pacing metric is a pacing period, and the pacing action is a change in the pacing period.

Regarding Claim 30: As shown above, Kachare and Ismailsheriff teach all the limitations of Claim 23. Kachare further teaches the system wherein, while serving a particular content transfer request, data written to, or read from, one of the local peripheral storage devices is transferred via a section of the cache, and wherein a same section of the cache is used to transfer a plurality of data chunks associated with a same content transfer request one after the other (see Kachare: Fig. 2, [0036], "the LL-DAX block storage software 103 in the host device 100 utilizes an LL-DAX protocol to send host commands (e.g., RDMA READ and RDMA WRITE commands) to the RDMA target 105 in the LL-DAX eSSD 101 to obtain low-latency direct access to data stored in the LL-DAX storage 109 (e.g., the LL-DAX block storage software 103 provides storage service to the application 102 or other system software layers at the host 100 and utilizes RDMA READ and RDMA WRITE requests to transfer data to and from the LL-DAX storage in the LL-DAX eSSDs). In this manner, the system of the present disclosure is configured to bypass a filesystem layer 110, an operating system (OS) layer 111, a block storage layer 112, and an NVMe-oF layer 113 of the host device 100 and obtain low-latency direct access to the data stored in the LL-DAX storage 109 of the LL-DAX eSSD 101.").

Regarding Claim 31: As shown above, Kachare and Ismailsheriff teach all the limitations of Claim 23. Kachare further teaches the system wherein the at least one pending queue comprises one or more of the following: a read pending queue and a write pending queue; pending queues for different ones of the local peripheral storage devices; pending queues for different groups of the local peripheral storage devices; pending queues for different peripheral interfaces; pending queues for different content request attributes; or pending queues for different content request initiators (see Kachare: Fig. 2, [0017]).

Regarding Claim 32: As shown above, Kachare and Ismailsheriff teach all the limitations of Claim 23. Ismailsheriff further teaches the system wherein the processing circuitry is to apply the trained pacing artificial intelligence model to find the pacing action responsively to at least one previous state and at least one previous pacing action of the storage sub-system (see Ismailsheriff: Fig. 5, [0140], "At step 510, the network controller can distribute the machine learning models generated at step 508 to the network devices for the devices to apply to new traffic data, and the network controller can receive the output of the machine learning models. For example, the network controller can continuously monitor the performance of a machine learning model and when precision, recall, accuracy, and/or other performance metrics (e.g., Table 4) are below certain thresholds or when a new machine learning model improves on the performance of the older machine learning model, the network controller can generate an alert to inform an administrator to update the older machine learning model."). Because Kachare and Ismailsheriff are in the same or similar field of endeavor (managing congestion and pacing traffic in network communication systems), it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify Kachare's storage-request pacing system to include a system that applies artificial-intelligence congestion prediction and pacing techniques as taught by Ismailsheriff. One would have been motivated to make such a combination to improve the accuracy, responsiveness, and adaptability of pacing actions.

Regarding Claim 33: As shown above, Kachare and Ismailsheriff teach all the limitations of Claim 23. Ismailsheriff further teaches the system wherein the processing circuitry is to compute the pacing metric responsively to the pacing action (see Ismailsheriff: Fig. 5, "The network controller can also update the network devices' existing machine learning models with newer, better performing machine learning models. In addition, the network controller can adjust how and when the network devices perform STACKing to account for the different contexts of the network devices. One of ordinary skill in the art will understand that the network controller can configure numerous other traffic operations of the network devices based on the output of the traffic class-specific congestion signatures 426, the window size and/or congestion threshold estimators 428, the STACKing reinforcement learning agents 430, and other machine learning models."). Because Kachare and Ismailsheriff are in the same or similar field of endeavor (managing congestion and pacing traffic in network communication systems), it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify Kachare's storage-request pacing system so that the processing circuitry computes the pacing metric responsively to the pacing action, as taught by Ismailsheriff. One would have been motivated to make such a combination to improve the accuracy, responsiveness, and adaptability of pacing actions.

Regarding Claim 34: As shown above, Kachare and Ismailsheriff teach all the limitations of Claim 23. Ismailsheriff further teaches the system wherein the processing circuitry is to receive weights for the trained pacing artificial intelligence model and install the weights in the trained pacing artificial intelligence model (see Ismailsheriff: Fig. 5, "Boosting attempts to identify a highly accurate hypothesis (e.g., low error rate) from a combination of many weak hypotheses (e.g., substantial error rate). Given a data set comprising data points within a class and not within the class and weights based on the difficulty of classifying a data point and a weak set of classifiers, boosting can generate and call a new weak classifier in each of a series of rounds. For each call, the distribution of weights may be updated to reflect the importance of the data points in the data set for the classification."). Because Kachare and Ismailsheriff are in the same or similar field of endeavor (managing congestion and pacing traffic in network communication systems), it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify Kachare's system so that the processing circuitry receives weights for the trained pacing artificial intelligence model and installs the weights in the trained pacing artificial intelligence model, as taught by Ismailsheriff. One would have been motivated to make such a combination to improve the accuracy, responsiveness, and adaptability of pacing actions.

Regarding Claim 35: As shown above, Kachare and Ismailsheriff teach all the limitations of Claim 23. Ismailsheriff further teaches the system wherein the processing circuitry is to pace commencement of the serving of the content transfer requests so that while the some content transfer requests are being served, the other content transfer requests pending serving are queued in the at least one pending queue (see Ismailsheriff: Fig. 5, [0114], "a queuing and dropping policy for each traffic class; examples of the applications, services, or protocols of each traffic class; and a percentage of the bandwidth allocated to each traffic class."); and the processing circuitry is to use the trained pacing artificial intelligence model to find the pacing action from which to derive the pacing metric for use in the pacing of commencement of the serving of the content transfer requests in the storage sub-system (see Ismailsheriff: Fig. 5, [0140], "At step 510, the network controller can distribute the machine learning models generated at step 508 to the network devices for the devices to apply to new traffic data, and the network controller can receive the output of the machine learning models. For example, the network controller can continuously monitor the performance of a machine learning model and when precision, recall, accuracy, and/or other performance metrics (e.g., Table 4) are below certain thresholds or when a new machine learning model improves on the performance of the older machine learning model, the network controller can generate an alert to inform an administrator to update the older machine learning model."). Because Kachare and Ismailsheriff are in the same or similar field of endeavor (managing congestion and pacing traffic in network communication systems), it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify Kachare's system to pace commencement of the serving of the content transfer requests so that while some content transfer requests are being served, the other content transfer requests pending serving are queued in the at least one pending queue, as taught by Ismailsheriff. One would have been motivated to make such a combination to improve the accuracy, responsiveness, and adaptability of pacing actions.

Regarding independent Claim 36: Claim 36 is directed to a method and recites the same or similar limitations as claim 23; it is rejected under the same rationale.

Regarding Claim 37: As shown above, Kachare and Ismailsheriff teach all the limitations of Claim 36. Kachare further teaches the method wherein the storage sub-system comprises a random-access memory (RAM), and the method further comprises evicting overflow from the cache to the RAM (see Kachare: Fig. 2, [0043], "the receive buffer size present on the eSSD 101, the maximum RDMA WRITE size supported by the eSSD 101, and the block size. In one or more embodiments, the LL-DAX storage capacity 109 may be dedicated storage capacity inside the eSSD 101 for LL-DAX applications and users. In one or more embodiments, LL-DAX and non-LL-DAX applications can share the same storage capacity inside the eSSD 101 through the file system or other system stack layers (e.g., the LL-DAX and non-LL-DAX applications may exchange data with each other)").

Regarding Claims 38-42: Claims 38, 39, 40, 41, and 42 are directed to methods and recite the same or similar limitations as claims 24, 28, 29, 30, and 31, respectively; they are rejected under the same rationale.

Response to Arguments

Applicant's arguments with respect to the claim amendments have been considered but are moot in view of the new combination of references used in the current rejection. The new combination of references was necessitated by Applicant's claim amendments. Therefore, the claims are rejected under the new combination of references as indicated above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure:

- US 11374858 B2 (Seshan; Lakshmi Narasimhan), "Methods And Systems For Directing Traffic Flows Based On Traffic Flow Classifications": The embodiments relate to computer networks, network appliances, network switches, network routers, machine learning, artificial intelligence, using machine learning to classify traffic flows, and using machine learning to improve hardware resource utilization by network appliances.

- US 20170149665 A1 (YOUSAF; Faqir Zarrar), "METHOD AND SYSTEM FOR MANAGING FLOWS IN A NETWORK": The present invention relates to a method, and further to a system, for managing flows in a network with a plurality of forwarding elements routing flows between network entities of a network or network domain.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZELALEM W SHALU, whose telephone number is (571) 272-3003. The examiner can normally be reached M-F, 8:00 am to 5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Zelalem Shalu/
Examiner, Art Unit 2145

/CESAR B PAULA/
Supervisory Patent Examiner, Art Unit 2145
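
For orientation only: the limitation in dispute under §103 is the AI-driven pacing loop of claim 23. The toy sketch below illustrates the shape of that limitation (a trained model yields a pacing action, the action updates a pacing metric such as the pacing period of claim 29, and the metric paces how queued requests are served). Every name, field, and threshold in it is hypothetical and is not drawn from the application, Kachare, or Ismailsheriff:

```python
# Illustrative only: a toy pacing loop in the shape of claim 23's
# disputed limitation. The model, state fields, and action values are
# hypothetical -- nothing here comes from the application or the
# cited references.
import collections
import time

pending = collections.deque()   # the "at least one pending queue"
pacing_period = 0.010           # pacing metric: seconds between serves

def trained_pacing_model(state):
    """Stand-in for a trained model: maps sub-system state to a
    pacing action (here, a change in the pacing period)."""
    if state["queue_depth"] > 64:
        return -0.002   # speed up serving when the backlog grows
    if state["cache_hit_rate"] < 0.5:
        return +0.002   # slow down when the cache is thrashing
    return 0.0

def serve_one(request):
    ...  # transfer content via the cache / peripheral interface

def pacing_loop(get_state):
    global pacing_period
    while pending:
        action = trained_pacing_model(get_state())          # find a pacing action
        pacing_period = max(0.001, pacing_period + action)  # derive the metric
        serve_one(pending.popleft())                        # serve one request
        time.sleep(pacing_period)                           # pace commencement
```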

Prosecution Timeline

Apr 05, 2022: Application Filed
Mar 22, 2025: Non-Final Rejection — §103
May 12, 2025: Interview Requested
Jun 05, 2025: Examiner Interview Summary
Jun 05, 2025: Examiner Interview (Telephonic)
Jun 16, 2025: Response Filed
Jun 16, 2025: Response after Non-Final Action
Jun 29, 2025: Response Filed
Aug 28, 2025: Final Rejection — §103
Nov 02, 2025: Request for Continued Examination
Nov 07, 2025: Response after Non-Final Action
Feb 04, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12477016: AUTOMATION OF VISUAL INDICATORS FOR DISTINGUISHING ACTIVE SPEAKERS OF USERS DISPLAYED AS THREE-DIMENSIONAL REPRESENTATIONS (granted Nov 18, 2025; 2y 5m to grant)
Patent 12468969: METHODS FOR CORRELATED HISTOGRAM CLUSTERING FOR MACHINE LEARNING (granted Nov 11, 2025; 2y 5m to grant)
Patent 12419611: PATIENT MONITOR, PHYSIOLOGICAL INFORMATION MEASUREMENT SYSTEM, PROGRAM TO BE USED IN PATIENT MONITOR, AND NON-TRANSITORY COMPUTER READABLE MEDIUM IN WHICH PROGRAM TO BE USED IN PATIENT MONITOR IS STORED (granted Sep 23, 2025; 2y 5m to grant)
Patent 12153783: User Interfaces and Methods for Generating a New Artifact Based on Existing Artifacts (granted Nov 26, 2024; 2y 5m to grant)
Patent 12120422: SYSTEMS AND METHODS FOR CAPTURING AND DISPLAYING MEDIA DURING AN EVENT (granted Oct 15, 2024; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 29% (48% with interview, a +19.0% lift)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 108 resolved cases by this examiner. Grant probability derived from career allow rate.
