Prosecution Insights
Last updated: April 19, 2026
Application No. 17/548,412

APPLICATION PROGRAMMING INTERFACES FOR INTEROPERABILITY

Final Rejection (§102)
Filed: Dec 10, 2021
Examiner: RONI, SYED A
Art Unit: 2432
Tech Center: 2400 — Computer Networks
Assignee: Nvidia Corporation
OA Round: 4 (Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (537 granted / 655 resolved; +24.0% vs TC avg)
Interview Lift: +22.0% on resolved cases with interview
Avg Prosecution: 2y 9m (26 applications currently pending)
Total Applications: 681 across all art units

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§103: 33.1% (-6.9% vs TC avg)
§102: 31.1% (-8.9% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)
Deltas are relative to the estimated Tech Center average • Based on career data from 655 resolved cases

Office Action

§102
DETAILED ACTION

Examiner Note: 713.09 Interviews Between Final Rejection and Notice of Appeal [R-08.2017]

Normally, one interview after final rejection is permitted in order to place the application in condition for allowance or to resolve issues prior to appeal. However, prior to the interview, the intended purpose and content of the interview should be presented briefly, preferably in writing. Such an interview may be granted if the examiner is convinced that disposal or clarification for appeal may be accomplished with only nominal further consideration. Interviews merely to restate arguments of record or to discuss new limitations which would require more than nominal reconsideration or new search should be denied. See MPEP § 714.13.

Interviews may be held after the expiration of the shortened statutory period and prior to the maximum permitted statutory period of 6 months without an extension of time. See MPEP § 706.07(f). A second or further interview after a final rejection may be held if the examiner is convinced that it will expedite the issues for appeal or disposal of the application. For interviews after notice of appeal, see MPEP § 1204.03.

Authorization for Internet Communications

The examiner encourages Applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03): “Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file.” Please note that the above statement can only be submitted via Central Fax (not Examiner's Fax), regular postal mail, or EFS-Web using form PTO/SB/439.
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 09/30/2025 and 11/12/2025 are being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3-10, and 12-30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by the prior art of record, Ashwathnarayan et al. (US 2020/0364088 A1) (hereinafter “Ashwathnarayan”) (submitted by the applicant via IDS 03/15/2023).
Ashwathnarayan discloses the following.

Regarding claim 1, one or more processors [i.e., (see figure 12) Directed Acyclic Graph of heterogeneous processors (CPU, camera, iGPU, dGPU) i.e., (para 0162 – 0163) processors coordinated via synchronization objects], comprising: circuitry to in response to perform a second application programming interface (API) call of a second software library [i.e., (figures 1 and 6) frame-level API library and parallel computing API model i.e., (para 0108) parallel computing platform API tasks enqueued and synchronized i.e., (para 0162) multiple UMDs/APIs signaling and waiting], update a timeline semaphore [i.e., (figure 11) semaphore acquire/release with address and value update i.e., (para 0128 – 0129) SignalExternalSemaphoresAsync enqueues semaphore release operations] at a memory address indicated by a first API of a first software library [i.e., (figure 11) steps showing calculation of semaphore address from semaphore pool i.e., (para 0129) semaphore offset and threshold value written when signaled] based, at least in part, on an identification of a parameter of a handle that indicates the memory address for the timeline semaphore by the second API [i.e., (see steps 812 – 816 of figure 8) provide handle; map parameters into UMD space i.e., (para 0106 – 0109) object handles, SciSyncObj pointer, semaphore descriptors reference handle parameters], wherein the first API created the timeline semaphore and exported the handle of the timeline semaphore [i.e., (see step 812 of figure 8) provide Handle to be used by the plurality of UMDs i.e., (para 0109) SciSyncObj created and referenced via handle pointer for import as external semaphore].
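The pattern mapped here (a first software library creates a timeline semaphore and exports an opaque handle; a second library imports the handle and updates the semaphore value at the shared location) resembles external-semaphore interop in GPU driver stacks. The Python below is a purely illustrative sketch: every class and method name (TimelineSemaphore, FirstAPI, create_and_export, and so on) is a hypothetical stand-in for driver-level C APIs, not a name taken from the application or from Ashwathnarayan.

```python
import threading

class TimelineSemaphore:
    """A monotonically increasing counter shared across 'libraries'."""
    def __init__(self):
        self.value = 0                      # timeline value at the shared "memory address"
        self._cond = threading.Condition()

    def signal(self, new_value):
        # Signaling only moves the timeline forward (a monotonically
        # increasing integer, as recited in claim 6).
        with self._cond:
            self.value = max(self.value, new_value)
            self._cond.notify_all()

    def wait(self, threshold):
        # Block until the timeline reaches the threshold value.
        with self._cond:
            self._cond.wait_for(lambda: self.value >= threshold)


class FirstAPI:
    """Stands in for the first software library: creates and exports."""
    def create_and_export(self):
        sem = TimelineSemaphore()
        handle = {"semaphore": sem}         # opaque handle identifying the semaphore
        return handle


class SecondAPI:
    """Stands in for the second software library: imports and updates."""
    def import_handle(self, handle):
        # Resolve the handle parameter back to the shared semaphore object.
        self._sem = handle["semaphore"]

    def signal(self, value):
        # A call through the second API updates the semaphore the first API created.
        self._sem.signal(value)


first, second = FirstAPI(), SecondAPI()
handle = first.create_and_export()          # first API creates and exports the handle
second.import_handle(handle)                # second API imports via the handle parameter
second.signal(5)                            # second API call updates the timeline value
print(handle["semaphore"].value)            # prints 5
```

In real driver stacks the "handle" would be an OS-level shareable object (for example a file descriptor) rather than an in-process reference; the sketch only shows the create/export/import/signal ordering.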
Regarding claim 3, the one or more processors of claim 1, wherein the second API imports the handle for the timeline semaphore by at least creating a data structure corresponding to the handle for the timeline semaphore [i.e., (figure 10) UnifiedSync imported, mapped, and associated with internal data structures i.e., (para 0109) semaphore descriptors reference SciSyncObj pointer].

Regarding claim 4, the one or more processors of claim 1, wherein the identification of the parameter corresponds to an identification of the handle for the timeline semaphore [i.e., (para 0106 – 0109) handle parameters used to identify internal attributes and mapping i.e., (see 816 figure 8) parameter mapping].

Regarding claim 5, the one or more processors of claim 1, wherein the circuitry is to perform a workload with an operation that references the handle [i.e., (figures 9 and 11) workloads enqueued on streams that wait on or signal semaphore handles i.e., (para 0108)].

Regarding claim 6, the one or more processors of claim 1, wherein the timeline semaphore corresponds to a monotonically increasing integer [i.e., (para 0129) semaphore thresholdValue written upon signal i.e., (figure 11) incrementing semaphore value upon release].

Regarding claim 7, the one or more processors of claim 1, wherein the parameter of the timeline semaphore is increased by one or more when it is signaled by a first driver corresponding to the first API or when it is signaled by a second driver corresponding to the second API [i.e., (para 0128 – 0129) driver-issued SignalExternalSemaphoresAsync causes semaphore release i.e., (figures 44B – 44D) UMD Signaler/App Signaler increment semantics].
Regarding claim 8, the one or more processors of claim 1, wherein the circuitry is to receive the indication of the timeline semaphore from an application, and wherein the application received the indication from the first API [i.e., (figures 44A – 44D) application receives UnifiedSync/semaphore object from API i.e., (para 0162)].

Regarding claim 9, the one or more processors of claim 1, wherein the timeline semaphore corresponds to synchronizing a first workload and second workload [i.e., (figure 9) device queue and stream synchronization i.e., (figure 12) synchronization between heterogeneous workloads i.e., (para 0162 – 0163)].

Regarding claim 10, a system [i.e., (see figure 17)], comprising a memory storing instructions that, as a result of execution by one or more processors [i.e., (see figures 17 and 19)], cause the system to: in response to perform a second application programming interface (API) call of a second software library [i.e., (figures 1 and 6) frame-level API library and parallel computing API model i.e., (para 0108) parallel computing platform API tasks enqueued and synchronized i.e., (para 0162) multiple UMDs/APIs signaling and waiting], update a timeline semaphore [i.e., (figure 11) semaphore acquire/release with address and value update i.e., (para 0128 – 0129) SignalExternalSemaphoresAsync enqueues semaphore release operations] at a memory address indicated by a first API of a first software library [i.e., (figure 11) steps showing calculation of semaphore address from semaphore pool i.e., (para 0129) semaphore offset and threshold value written when signaled] based, at least in part, on an identification of a parameter of a handle that indicates the memory address for the timeline semaphore by the second API [i.e., (see steps 812 – 816 of figure 8) provide handle; map parameters into UMD space i.e., (para 0106 – 0109) object handles, SciSyncObj pointer, semaphore descriptors reference handle parameters], wherein the first API created the timeline semaphore and exported the handle of the timeline semaphore [i.e., (see step 812 of figure 8) provide Handle to be used by the plurality of UMDs i.e., (para 0109) SciSyncObj created and referenced via handle pointer for import as external semaphore].

Regarding claim 12, the system of claim 10, wherein the second API is to identify the handle for the timeline semaphore during import based, at least in part, on the identification of the parameter of the handle or a parameter of a call of the second API [i.e., (para 0106) and (step 816 figure 8) handle identification during import].

Regarding claim 13, the system of claim 10, wherein the one or more processors is to perform a workload with an operation that references the handle [i.e., (figures 9 and 11) workload referencing handle].

Regarding claim 14, the system of claim 10, wherein the first API created the timeline semaphore and exported the handle, and wherein the indication corresponds to a data structure for the handle for the timeline semaphore imported by the second API [i.e., (para 0109) and (figure 10) imported data structure corresponds to handle].

Regarding claim 15, the system of claim 10, wherein the timeline semaphore corresponds to controlling access to a computing resource [i.e., (para 0162 – 0163) controlling access to shared computing resources].

Regarding claim 16, the system of claim 10, wherein the timeline semaphore is to be referenced by a first stream and a second stream, and wherein the first stream and the second stream are to be synchronized based on reading a value corresponding to the timeline semaphore [i.e., (figures 9 and 11) streams synchronized by semaphore value].
Regarding claim 17, a non-transitory machine-readable medium having stored thereon one or more instructions [i.e., (see figures 17 and 19)], which if performed by one or more processors, cause one or more processors to at least [i.e., (para 0108 – 0129)]: in response to perform a second application programming interface (API) call of a second software library [i.e., (figures 1 and 6) frame-level API library and parallel computing API model i.e., (para 0108) parallel computing platform API tasks enqueued and synchronized i.e., (para 0162) multiple UMDs/APIs signaling and waiting], update a timeline semaphore [i.e., (figure 11) semaphore acquire/release with address and value update i.e., (para 0128 – 0129) SignalExternalSemaphoresAsync enqueues semaphore release operations] at a memory address indicated by a first API of a first software library [i.e., (figure 11) steps showing calculation of semaphore address from semaphore pool i.e., (para 0129) semaphores offset and threshold value written when signaled] based, at least in part, on an identification of a parameter of a handle that indicates the memory address for the timeline semaphore by the second API [i.e., (see steps 812 – 816 of figure 8) provide handle; map parameters into UMD space i.e., (para 0106 – 0109) object handles, SciSyncObj pointer, semaphore descriptors reference handle parameters], wherein the first API created the timeline semaphore and exported the handle of the timeline semaphore [i.e., (see step 812 of figure 8) provide Handle to be used by the plurality of UMDs i.e., (para 0109) SciSyncObj created and referenced via handle pointer for import as external semaphore]. 
Regarding claim 18, the non-transitory machine-readable medium of claim 17, wherein the one or more instructions further cause the one or more processors to at least: create, by the first API, the timeline semaphore [i.e., (figures 10 – 11), (para 0128 – 0129) creates semaphore], and signal, with a driver, the timeline semaphore based on the handle, and wherein to signal includes an operation that causes the timeline semaphore to modify the parameter [i.e., (figures 10 – 11), (para 0128 – 0129) creates semaphore and signal via driver].

Regarding claim 19, the non-transitory machine-readable medium of claim 18, wherein the parameter of the timeline semaphore is increased by a value of one or more when it is signaled by a driver [i.e., (para 0129) semaphore parameter increased when signaled].

Regarding claim 20, the non-transitory machine-readable medium of claim 18, wherein the handle is a pointer for an operating system to read or write data at the memory address for the timeline semaphore [i.e., (para 0109), (figure 11) handle is pointer to memory address].

Regarding claim 21, the non-transitory machine-readable medium of claim 17, wherein the one or more instructions further cause the one or more processors to at least: generate a first work stream and a second work stream, and wherein the first work stream and the second work stream are synchronized based on operations corresponding to the timeline semaphore [i.e., (figure 9) shows a device queue and a parallel computing API stream coordinating using SciSyncFence i.e., (figure 11) i.e., (para 0108 – 0109) describes work submitted to queues and streams, generating fences and waiting across streams using imported semaphore i.e., (para 0128 – 0129) semaphore release operations include writing a threshold value used for synchronization].
Regarding claim 22, the non-transitory machine-readable medium of claim 17, wherein the first API has a queue of operations, and wherein the queue of operations includes a wait operation that references the timeline semaphore [i.e., (figure 9) device queue waits on SciSyncFence generated by another queue i.e., (figure 11) streamWaitEvent(stream, WaitEvent) corresponds to wait operation referencing a semaphore i.e., (para 0108) device queue waits for generated SciSyncFence i.e., (para 0123 – 0126) wait operations reference external semaphores passed as parameters].

Regarding claim 23, the non-transitory machine-readable medium of claim 17, wherein the first software library and the second software library reference the timeline semaphore to synchronize operations for graphics processing [i.e., (figure 13) CUDA and NVMedia/OpenGL interop synchronization via SciSync i.e., (para 0162 – 0163) synchronization object coordinates execution and memory access across UMDs including graphics i.e., (figure 12) GPU synchronization via semaphore between processing stages i.e., (para 0164) SciSync correlates CUDA events and graphics synchronization].
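Claims 21-23 concern queues and streams whose wait operations reference the shared timeline semaphore. As a purely illustrative sketch (threads standing in for hardware queues and streams; all names hypothetical, not taken from the application or the reference), two "streams" can be ordered through a shared, monotonically increasing timeline value:

```python
import threading

class TimelineSemaphore:
    """Minimal timeline semaphore: a monotonically increasing value with wait/signal."""
    def __init__(self):
        self.value = 0
        self._cond = threading.Condition()

    def signal(self, new_value):
        with self._cond:
            self.value = max(self.value, new_value)
            self._cond.notify_all()

    def wait(self, threshold):
        with self._cond:
            self._cond.wait_for(lambda: self.value >= threshold)


sem = TimelineSemaphore()
order = []

def producer_stream():
    order.append("produce")     # first workload runs first...
    sem.signal(1)               # ...then signals timeline value 1

def consumer_stream():
    sem.wait(1)                 # wait operation referencing the shared semaphore
    order.append("consume")     # second workload runs only after the signal

consumer = threading.Thread(target=consumer_stream)
producer = threading.Thread(target=producer_stream)
consumer.start()
producer.start()
producer.join()
consumer.join()
print(order)                    # prints ['produce', 'consume']
```

Even though the consumer thread starts first, its wait on the timeline value guarantees the producer's workload completes before the consumer's runs, which is the cross-stream ordering the claims describe.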
Regarding claim 24, a method comprising: identifying a timeline semaphore at an address in memory from a first application programming interface (API) of a first software library [i.e., (figures 10 and 11) semaphore pool address calculated and used]; generating a handle indicating the address of the timeline semaphore using the first software library [i.e., (see step 812 of figure 8) handle provided to UMDs]; causing the timeline semaphore to be imported to a second software library based, at least in part, on an identification of a parameter of the handle [i.e., (para 0109) external semaphore import uses a descriptor referencing a SciSyncObj pointer i.e., (figure 10) UnifiedSync imported and associated with internal data structures i.e., (para 0122 – 0124) wait APIs accept parameters identifying external semaphore handles]; and in response to receiving a second API call of the second software library, update the timeline semaphore at the address in memory indicated by the handle [i.e., (para 0128) SignalExternalSemaphoresAsync() enqueues a signal operation i.e., (para 0129) signal operation writes semaphore offset and threshold value i.e., (see figure 11) semaphore release operation modifies semaphore at calculated memory address].

Regarding claim 25, the method of claim 24, wherein the method further comprises: signaling, with a driver, the timeline semaphore based on the handle that references the address for the timeline semaphore, wherein another driver also signals the timeline semaphore [i.e., (figures 44B – 44D) App Signaler and UMD Signaler both signal UnifiedSync i.e., (para 0128) signal operations enqueued by driver APIs i.e., (para 0162 – 0163) synchronization across multiple UMDs and drivers].
Regarding claim 26, the method of claim 24, wherein the parameter of the handle comprises a counter parameter or wait parameter of the timeline semaphore [i.e., (para 0129) semaphore signal includes <offset, thresholdValue> i.e., (figure 11) semaphore acquire/release uses value parameters i.e., (para 0164) SciSync abstracts semaphore counters and syncpoints].

Regarding claim 27, the method of claim 24, wherein causing the timeline semaphore to be imported further comprises: requesting, by an application, that the first API create and export the handle corresponding to the timeline semaphore [i.e., (figure 8) request allocation attributes; create object; export handle], providing, by the application, the exported handle to the second API [i.e., (para 0106 – 0109) handles duplicated, passed, and mapped into UMD space], identifying, by the second API, a parameter that indicates the exported handle; and importing the exported handle [i.e., (figure 10) UnifiedSync creation, export, and import flow].

Regarding claim 28, the method of claim 27, wherein the request for the application corresponds to graphics processing and/or image rendering, and wherein the application causes the first API to be used for a portion of the processing and/or a portion of the image rendering [i.e., (figures 1 and 13) CUDA, OpenGL, NVMedia graphics interoperability i.e., (para 0162 – 0163) synchronization of graphics and compute workloads i.e., (figure 12) GPU processing stages synchronized via semaphore].

Regarding claim 29, the method of claim 24, wherein the method further comprises: signaling the timeline semaphore, wherein signaling includes causing a parameter of the timeline semaphore to increase in value [i.e., (para 0129) semaphore threshold value increases when signaled]; and releasing references to the timeline semaphore [i.e., (figure 11) release operation modifies semaphore state i.e., (para 0106) deallocation and removal of UMD references after use].
Regarding claim 30, the method of claim 24, the method further comprising: providing a first queue [i.e., (figure 9) device queue and parallel computing API stream coordinated via semaphore i.e., (figure 11) multiple streams (streamA, stream) synchronized using semaphore count value]; providing a first stream [i.e., (para 0108 – 0109 and 0129) operations corresponding to semaphore threshold/count values]; and providing a second stream, and wherein the first queue, the first stream, and the second stream have operations that correspond to a count value of the timeline semaphore [i.e., (para 0108 – 0109 and 0129) operations corresponding to semaphore threshold/count values].

Response to Arguments

Applicant's arguments with respect to pending claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED A RONI, whose telephone number is (571) 270-7806. The examiner can normally be reached M-F, 9:00 am - 5:00 pm (EST).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jeffrey L Nickerson, can be reached at (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SYED A RONI/
Primary Examiner, Art Unit 2432

Prosecution Timeline

Dec 10, 2021 — Application Filed
Aug 12, 2023 — Non-Final Rejection (§102)
Feb 20, 2024 — Response Filed
May 09, 2024 — Final Rejection (§102)
Nov 14, 2024 — Notice of Allowance
May 14, 2025 — Request for Continued Examination
May 24, 2025 — Response after Non-Final Action
Aug 15, 2025 — Non-Final Rejection (§102)
Nov 04, 2025 — Interview Requested
Nov 17, 2025 — Applicant Interview (Telephonic)
Nov 17, 2025 — Examiner Interview Summary
Nov 19, 2025 — Response Filed
Feb 09, 2026 — Final Rejection (§102, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591684 — CENTRALIZED SECURITY ANALYSIS AND MANAGEMENT OF SOURCE CODE IN NETWORK ENVIRONMENTS — Granted Mar 31, 2026 (2y 5m to grant)
Patent 12574354 — CLIENT FILTER VPN — Granted Mar 10, 2026 (2y 5m to grant)
Patent 12572379 — Static Trusted Execution Environment for Inter-Architecture Processor Program Compatibility — Granted Mar 10, 2026 (2y 5m to grant)
Patent 12561420 — SYSTEM AND METHOD FOR AUTHENTICATING USERS VIA PATTERN BASED DIGITAL RESOURCES ON A DISTRIBUTED DEVELOPMENT PLATFORM — Granted Feb 24, 2026 (2y 5m to grant)
Patent 12547760 — METHOD FOR EVALUATING THE RISK OF RE-IDENTIFICATION OF ANONYMISED DATA — Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 82%
With Interview: 99% (+22.0%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 655 resolved cases by this examiner. Grant probability derived from career allow rate.
