Prosecution Insights
Last updated: April 19, 2026
Application No. 18/086,433

APPLICATION PROGRAMMING INTERFACE TO TRANSFORM INFORMATION CORRESPONDING TO A MEMORY TRANSACTION

Status: Final Rejection (§103)
Filed: Dec 21, 2022
Examiner: AHMAD, NAUMAN UDDIN
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 4 (Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 8m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 78%, above average (28 granted / 36 resolved; +15.8% vs TC avg)
Interview Lift: +19.8% across resolved cases with interview (a strong lift)
Typical Timeline: 2y 8m average prosecution; 31 applications currently pending
Career History: 67 total applications across all art units
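The headline allow rate above follows directly from the career counts; as a quick arithmetic check:

```python
# Examiner career counts from the panel above
granted, resolved = 28, 36
allow_rate = granted / resolved * 100  # ~77.8, displayed rounded as 78%
```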

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§102: 4.1% (-35.9% vs TC avg)
§112: 15.8% (-24.2% vs TC avg)

"vs TC avg" compares against the estimated Tech Center average • Based on career data from 36 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action is in response to Applicant's RCE amendment filed 01/06/2026, which has been entered and made of record. Claims 1-9, 11, and 14-15 have been amended. No claim has been cancelled or newly added. Claims 1-20 are pending in the application.

Response to Arguments

Applicant's arguments filed 1/6/26 have been fully considered but they are not persuasive.

In response to applicant's argument that "the API functions in Seideman disclose passive tracking and recording of events that have already occurred, not accounting of the transaction itself", the examiner respectfully disagrees. As recited in claim 1, these limitations are taught by Seideman. In particular, and in addition to the citations in claim 1 below, Seideman teaches tracking of events/transactions as they occur, accounting for the transaction itself, because col. 6, lines 1-5 teach "[C]orrelation record 229 can allow an application or a user to track or trace the flow of data through the network environment 200 as it moves from one system or data store 223 to another system or data store 223 and as it is accessed and modified by various applications." Tracking a transaction as it moves does not mean only tracking and recording of events that have already occurred; rather, because the tracking is done as the data moves, it accounts for the transaction itself as well.

In response to applicant's argument that "None of the indicated sections discloses 'manual transaction accounting,'" the examiner respectfully disagrees. The portion of Seideman cited above discloses manual transaction accounting and is consistent with the applicant's disclosure, paragraph 68, which states "manual transaction accounting is referred to as manual tracking. In at least one embodiment, manual transaction accounting is when a user (e.g., computer program code, such as a kernel running on PPU 106) performs one or more aspects of tracking data to be asynchronously moved". Seideman (col. 6, lines 1-5), in mentioning "allow an application or a user to track or trace the flow of data", is consistent with this definition of manual transaction accounting, since the user tracking or tracing the flow of data is manual tracking. Therefore, the examiner is not persuaded by the applicant.

In response to applicant's argument that "Any accounting in Seideman is predefined and automatically performed by the system, rather than explicitly performed or controlled by a developer through the API": as recited in claim 1 below, these limitations are taught by Seideman. In particular, and in addition to the citations in claim 1 below, Seideman col. 4, lines 16-20 teaches "client application 223 can be executed in a client device 216 to access data stored in a data store 223 or in the distributed ledger 119, thereby rendering a user interface on the display of the client device 216 to manipulate, process, analyze, or visualize the accessed data". A user interface to manipulate, process, analyze, or visualize the accessed data shows the accounting is explicitly performed by a user/developer using the API cited in Seideman below in claim 1 and/or the API of Vasilache (Seideman is not portrayed in the rejection as providing this full teaching; the rejection instead relies on the plurality of references to teach the feature when applied to using an API).

In response to applicant's arguments against the references individually, and that "There is no indication in the cited portions of Vasilache that the API provides functions to perform manual transaction accounting. Instead, the API in Vasilache exposes functions that are limited to translating and defining tensor computations": one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

In response to applicant's argument that there is no teaching, suggestion, or motivation to combine the references [pp. 9-10 of remarks: "Office Action does not articulate a sufficient rationale for relying on Seideman to teach manual transaction accounting… fails to provide a reasoned explanation for applying Seideman's transaction accounting to Vasilache's architecture, and the asserted rationale therefore lacks support in the cited disclosure."]: the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, as cited in the office action for the motivation of combining the two references, improved and ensured security from Seideman would be a motivation to combine the references despite Vasilache not explicitly disclosing any security concern, because a person having ordinary skill in the art before the effective filing date of the claimed invention would understand that doing so (and providing a "tamper-resistant record") would keep data safe and protect information from unauthorized access, leading to a more secure invention overall and less chance of data breaches and data leaks.
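The dispute above turns on what counts as "manual transaction accounting." Purely as an illustrative sketch (not Applicant's, Seideman's, or Vasilache's actual code; all names here are invented), the difference between system-driven event recording and caller-performed tracking in the sense of the quoted specification paragraph 68 might look like:

```python
class Ledger:
    """Hypothetical record of data-movement events."""
    def __init__(self):
        self.entries = []

def runtime_auto_record(ledger, src_id, dst_id, nbytes):
    # Automatic accounting: the runtime logs the event after it occurs.
    ledger.entries.append(("auto", src_id, dst_id, nbytes))

def copy_with_manual_accounting(ledger, src, dst):
    # "Manual" accounting in the sense of the quoted paragraph 68:
    # the caller (user code, e.g. a kernel) itself performs the tracking
    # of the data to be moved, as part of issuing the transaction.
    ledger.entries.append(("manual", id(src), id(dst), len(src)))
    dst[:] = src  # the memory transaction itself

ledger = Ledger()
a, b = [1, 2, 3], [0, 0, 0]
copy_with_manual_accounting(ledger, a, b)
```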
In response to applicant's argument that "nor does Seideman provide a rationale for modifying Vasilache in that manner. Doing so would introduce additional delay and complexity into Vasilache's API functionality, with no corresponding technical benefit": a recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 4, 8, 14, 18 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Vasilache et al. (Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions), hereinafter referenced as Vasilache, in view of Seideman et al. (U.S. Patent No. 11,544,229), hereinafter referenced as Seideman, and Marani et al. (U.S. Patent Application Publication No. 2022/0138551), hereinafter referenced as Marani.

Regarding claim 1, Vasilache teaches one or more processors comprising: circuitry to, in response to an application programming interface (API) call, cause one or more memory transactions (page 31, the fig. 13 explanation teaches that all libraries that expose tensor comprehensions go through an API; page 12, section 5.1, component 1 teaches folding a complete TC (tensor comprehension) program into a single GPU kernel; and page 12, section 5.1, component 4 teaches memory promotion with data transfer); the kernel in this case is executed by the GPU, which is formed of circuits, and a data transfer is essentially a memory transaction since memory is being accessed; to asynchronously transform a first tensor (the abstract teaches ATen, which is an asynchronous tensor library, and pages 8-9, section 3.2, as well as fig. 2, teach data layout transformations done for tensor comprehension, which uses the ATen library); the transformations are said to be done by composing operations on tensor metadata, which, along with fig. 2, shows an asynchronous tensor transformation being performed; also, the data layout transformations in section 3.2 are for tensor comprehensions, and fig. 2 shows a tensor comprehension going through intermediate transformation steps before being executed; wherein the API is to receive one or more source memory locations and one or more destination memory locations for the one or more memory transactions (page 12, section 5.1, component 4 teaches data transfers to and from shared and private memory, and page 31, fig. 13 teaches using an API to exchange tensor data); the data transfer shows memory transactions done between the source/shared and destination/private memory; in order to do this, the respective memory locations must first be obtained, which is all part of the process of tensor data exchange as done by the API of fig. 13.

However, Vasilache fails to teach transform a first tensor into a different second tensor; and to provide one or more functions to perform manual transaction accounting for the one or more memory transactions. However, Seideman teaches and to provide one or more functions to perform manual transaction accounting for the one or more memory transactions (Seideman, col. 6, lines 1-5 teach a user can track the flow of data; col. 3, lines 46-49 teach a data tracker can store/create event records in a ledger for data being accessed, stored, modified, copied or deleted; and col. 3, lines 56-63 teach an API function to notify the data tracker of details of an event of a data store); the memory transactions occur when the data is being accessed/stored, the user tracking data shows the transaction tracking being done manually, and the function provided by the API ensures the accounting/tracking is done. Seideman is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of memory transactions alongside API(s) and tracking data. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Vasilache's invention with the data tracking techniques of Seideman to provide an auditable and tamper-resistant record of the transaction (Seideman, col. 1, lines 54-57). This ensures security and trust for data within an application and/or system.

However, the combination of Vasilache and Seideman fails to teach transform a first tensor into a different second tensor. However, Marani teaches transform a first tensor into a different second tensor (Marani, paragraph 32 teaches "the transforming of the first tensor into the second tensor and subsequently writing the second tensor to the second storage using the second stride that is related to a multiple of the first stride" and paragraph 38 teaches "Advantageously, having a second stride that is greater than the first stride, and perhaps is a multiple of the first stride, results in gaps in memory when the transformed first tensor (that is, the second tensor) is written to the second storage"); a second stride greater than the first stride shows the second tensor being different than the first after it is transformed.
Marani is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of tensor transformations from a first tensor to a different second tensor. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Vasilache and Seideman with the tensor transformation techniques of Marani so that transforming the first tensor into the second tensor and writing the second tensor to the second storage may be performed in a first iteration of an iterative process and in response to partial completion of the first iteration, such that there is some overlap between the performance of the first and second iterations, speeding up completion of the overall iterative process (Marani, paragraph 42). This would mean a faster, more efficient application/system overall.

Regarding claim 4, the combination of Vasilache, Seideman and Marani teaches wherein the API is to receive as input information indicating a source memory location and a destination memory location to be used to perform the one or more memory transactions (Vasilache, page 12, section 5.1, component 4 teaches data transfers to and from shared and private memory, and page 31, fig. 13 teaches using an API to exchange tensor data); the data transfer shows memory transactions done between the source/shared and destination/private memory; in order to do this, the respective memory locations must first be obtained, which is all part of the process of tensor data exchange as done by the API of fig. 13.

Regarding claim 8, the system claim recites similar limitations as product/processor claim 1, and thus is rejected under similar rationale. Method claim 14 recites similar limitations as product/processor claim 1, and thus is rejected under similar rationale.
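To make the claim-1 language concrete, here is a minimal, hypothetical sketch of an API call that receives source and destination memory locations and asynchronously transforms a first "tensor" into a different second one. This is illustrative only: `api_async_transform` is an invented name, and a real implementation would run on a GPU rather than a Python thread.

```python
import threading

def api_async_transform(src, dst, transform):
    """Hypothetical API: given source and destination locations, start an
    asynchronous memory transaction that writes transform(src) into dst."""
    def worker():
        for i, x in enumerate(src):
            dst[i] = transform(x)
    handle = threading.Thread(target=worker)
    handle.start()
    return handle  # caller synchronizes explicitly, as with an async copy

first = [1.0, 2.0, 3.0]    # source memory location
second = [0.0, 0.0, 0.0]   # destination memory location
api_async_transform(first, second, lambda x: x * 2.0).join()
```

Note that the caller, not the API, decides when to synchronize on the returned handle; that explicit, caller-driven bookkeeping is the flavor of "manual" control the claims describe.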
Regarding claim 18, the combination of Vasilache, Seideman and Marani teaches wherein the API is to receive as input an identifier of information to be used to perform transaction accounting (Seideman, col. 4, lines 62-63 teach a trace identifier to uniquely identify a transaction); since the identifier is used to identify the transaction, it is used to perform the accounting of the transaction itself. The same motivations used in claim 1 apply here in claim 18.

Regarding claim 20, a non-transitory computer-readable medium recites similar limitations as product/processor claim 1, and thus is rejected under similar rationale.

Claim(s) 2, 9-10, 12, and 15-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Vasilache, Seideman and Marani as applied to claims 1, 8 and 14 above, and further in view of Hardwick et al. (U.S. Patent Application Publication No. 2022/0303178), hereinafter referenced as Hardwick.

Regarding claim 2, the combination of Vasilache, Seideman and Marani fails to explicitly teach wherein the one or more memory transactions are asynchronous reduction operations to be performed by a graphics processing unit (GPU). However, Hardwick teaches wherein the one or more memory transactions are asynchronous reduction operations to be performed by a graphics processing unit (GPU) (Hardwick, paragraph 29 teaches simplifying an application's code to reduce the number of asynchronous operations, and paragraph 60 teaches the subject matter described can be implemented/performed by a GPU). As depicted in Hardwick, fig. 6, the application's code 625 and data 626 are within system memory 620, so they could include the aforementioned memory transaction(s), since the memory transactions would typically occur in system memory. Application code consists of instructions stored in memory, and any time access to the code and/or data is made, it is considered a memory transaction since memory is being accessed.
Hardwick is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of performing memory transactions and asynchronous reduction operations on specific hardware. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Vasilache, Seideman and Marani with the operation-reduction techniques of Hardwick to reduce latency (Hardwick, paragraph 29).

Regarding claim 9, the system claim is similar to product/processor claim 2, and thus is rejected under similar rationale.

Regarding claim 10, the combination of Vasilache, Seideman, Marani, and Hardwick teaches wherein the one or more memory transactions are asynchronous reduction operations to be performed by a graphics processing unit (GPU) (Hardwick, paragraph 29 teaches simplifying an application's code to reduce the number of asynchronous operations, and paragraph 60 teaches the subject matter described can be implemented/performed by a GPU). As depicted in Hardwick, fig. 6, the application's code 625 and data 626 are within system memory 620, so they could include the aforementioned memory transaction(s), since the memory transactions would typically occur in system memory. Application code consists of instructions stored in memory, and any time access to the code and/or data is made, it is considered a memory transaction since memory is being accessed. The same motivations used in claim 2 apply here in claim 10.

Regarding claim 12, the combination of Vasilache, Seideman, Marani, and Hardwick teaches wherein the API is to receive as input one or more characteristics of data to be transformed (Hardwick, paragraph 100 teaches data including a version identifier that indicates a format of the byte array of application data). A version identifier indicating a format is a characteristic of data.
The API would receive this characteristic of data because it concerns the data to be transformed through the performance of the API of Vasilache. The same motivations used in claim 2 apply here in claim 12.

Method claim 15 is similar to product/processor claim 2, and thus is rejected under similar rationale.

Regarding claim 16, the combination of Vasilache, Seideman, Marani, and Hardwick teaches wherein the API is to perform a reduction operation (Hardwick, paragraph 29 teaches simplifying an application's code to reduce the number of asynchronous operations). The same motivations used in claim 2 apply here in claim 16.

Claim(s) 5-7, 13, 17 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Vasilache, Seideman and Marani as applied to claims 1, 8 and 14 above, and further in view of Kerr et al. (U.S. Patent Application Publication No. 2021/0124582), hereinafter referenced as Kerr.

Regarding claim 5, the combination of Vasilache, Seideman and Marani fails to teach wherein the API is to receive as input information indicating a shape of data to be copied using the one or more memory transactions. However, Kerr teaches wherein the API is to receive as input information indicating a shape of data to be copied using the one or more memory transactions (Kerr, paragraph 130 teaches the GPU receiving 3D graphics data to generate 2D image data). If the GPU of Vasilache receives information/data which is 3D, the input has an indicator of a three-dimensional shape associated with the data that will be copied and used in the memory transaction(s). Kerr is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of accessing and manipulating data using various memories of the GPU.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Vasilache, Seideman and Marani to incorporate the teachings of Kerr to reduce latency and provide higher bandwidth (Kerr, paragraph 56). This would mean improved computational efficiency as well as faster data access.

Regarding claim 6, the combination of Vasilache, Seideman, Marani and Kerr teaches wherein the one or more memory transactions are to move data between shared memory of a graphics processing unit (GPU) and global memory of the GPU (Kerr, paragraphs 5 and 9 teach the GPU having various types of memories, paragraph 50 teaches parallel processors with access to global memory, paragraph 146 teaches the parallel processing unit comprises a GPU, and paragraph 71 teaches transferring data from a first memory (e.g., global memory) to a second memory (e.g., shared memory)). Hence both referenced memories can be part of the GPU, and data moves across/between the memories. In addition, Vasilache, page 12, section 5.1, component 4 teaches data transfers to and from shared and private memory. This correlates to applicant's specification, paragraph 72, which defines global memory as accessible by the entire GPU (acting like shared memory) and shared memory as accessible by a particular group of threads (acting like private memory). The same motivation used in claim 5 for Kerr applies here for claim 6.

Regarding claim 7, the combination of Vasilache, Seideman, Marani and Kerr teaches wherein the API is to provide to a user an indication of one or more hardware units to be used to perform the one or more memory transactions (Kerr, paragraph 161 teaches an API provides an abstraction for a programmer that lets a programmer utilize specialized graphics hardware for generating data, and paragraph 71 teaches the GPU may retrieve/transfer data from a first memory to a second memory).
The abstraction acts as an indication and would have to show the hardware selected by the programmer/user to be used, since the user must select it. Also, retrieving data from the GPU explicitly shows the one or more memory transactions being performed. The same motivation used in claim 5 for Kerr applies here for claim 7.

Regarding claim 13, the combination of Vasilache, Seideman, Marani and Kerr teaches wherein the API is to indicate whether a particular hardware unit on a graphics processing unit (GPU) is to perform the one or more memory transactions (Kerr, paragraph 161 teaches an API provides an abstraction for a programmer that lets a programmer utilize specialized graphics hardware for generating data, and paragraph 71 teaches the GPU may retrieve/transfer data from a first memory to a second memory). The abstraction acts as an indication and would have to show the hardware selected by the programmer/user to be used, since the user must select it. Since utilizing specialized graphics hardware is mentioned, a particular hardware unit on a GPU would fall under that category. Also, retrieving data from the GPU explicitly shows the one or more memory transactions being performed. The same motivation used in claim 5 for Kerr applies here for claim 13.

Regarding claim 17, the combination of Vasilache, Seideman, Marani and Kerr teaches wherein the one or more memory transactions are to be performed from a first memory of a graphics processing unit (GPU) (Kerr, paragraph 71 teaches retrieving data from the GPU); this retrieval is accessing memory, so it is a memory transaction done by the GPU; and a second memory of the GPU (Kerr, paragraphs 5 and 9 teach the GPU having various types of memories, paragraph 50 teaches parallel processors with access to global memory, paragraph 146 teaches the parallel processing unit comprises a GPU, and paragraph 71 teaches transferring data from a first memory to a second memory). The same motivations used in claim 5 for Kerr apply here in claim 17.
Regarding claim 19, the combination of Vasilache, Seideman, Marani and Kerr teaches wherein the API is to be performed using global memory of a graphics processing unit (GPU) and shared memory of the GPU (Kerr, paragraphs 5 and 9 teach the GPU having various types of memories, paragraph 50 teaches parallel processors with access to global memory, paragraph 146 teaches the parallel processing unit comprises a GPU, and paragraph 71 teaches transferring data from a first memory (e.g., global memory) to a second memory (e.g., shared memory)). Hence both referenced memories can be part of the GPU, and data moves across/between the memories. The data would be that of the one or more memory transactions, meaning the API would be performed using the mentioned first memory (e.g., global memory) and second memory (e.g., shared memory) as well. In addition, Vasilache, page 12, section 5.1, component 4 teaches data transfers to and from shared and private memory. This correlates to applicant's specification, paragraph 72, which defines global memory as accessible by the entire GPU (acting like shared memory) and shared memory as accessible by a particular group of threads (acting like private memory). The same motivation used in claim 5 for Kerr applies here for claim 19.

Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Vasilache, Seideman and Marani as applied to claim 1 above, and further in view of Petit (U.S. Patent Application Publication No. 2019/0384613), hereinafter referenced as Petit.

Regarding claim 11, the combination of Vasilache, Seideman and Marani fails to teach wherein the API is to receive as input a reduction operation to be performed by a graphics processing unit (GPU).
However, Petit teaches wherein the API is to receive as input a reduction operation to be performed by a graphics processing unit (GPU) (Petit, paragraph 79 teaches "in an embodiment, the program to be executed by the graphics processor and that includes the reduction operation, is for performing more general 'compute' processing (rather than graphics processing per se), such as in accordance with the OpenCL or Vulkan APIs, or other forms of kernel execution."); this shows a reduction operation performed by the GPU, and since it is done in accordance with API(s), the API is to receive it as input. Petit is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of using a GPU and an API for reduction operations. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Vasilache, Seideman and Marani with the reduction operation techniques of Petit to make execution of the reduction operation more efficient (Petit, paragraph 76). This is done by the specific implementation of the reduction operation, leading to a faster system and enhanced user experience.

Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Vasilache, Seideman and Marani as applied to claim 1 above, and further in view of Hardwick and Kerr.

Regarding claim 3, the combination of Vasilache, Seideman and Marani fails to teach wherein the one or more memory transactions are asynchronous reduction operations to be performed by a graphics processing unit (GPU), and the one or more memory transactions include moving data from a first memory of the GPU and a second memory of the GPU.
However, Hardwick teaches wherein the one or more memory transactions are asynchronous reduction operations to be performed by a graphics processing unit (GPU) (Hardwick, paragraph 29 teaches simplifying an application's code to reduce the number of asynchronous operations, and paragraph 60 teaches the subject matter described can be implemented/performed by a GPU). As depicted in Hardwick, fig. 6, the application's code 625 and data 626 are within system memory 620, so they could include the aforementioned memory transaction(s), since the memory transactions would typically occur in system memory. Application code consists of instructions stored in memory, and any time access to the code and/or data is made, it is considered a memory transaction since memory is being accessed. The same motivations used in claim 2 for Hardwick apply here in claim 3.

However, the combination of Vasilache, Seideman, Marani and Hardwick fails to teach and the one or more memory transactions include moving data from a first memory of the GPU and a second memory of the GPU. However, Kerr teaches and the one or more memory transactions include moving data from a first memory of the GPU and a second memory of the GPU (Kerr, paragraphs 5 and 9 teach the GPU having various types of memories, paragraph 50 teaches parallel processors with access to global memory, paragraph 146 teaches the parallel processing unit comprises a GPU, and paragraph 71 teaches transferring data from a first memory to a second memory). The same motivations used in claim 5 for Kerr apply here in claim 3.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAUMAN U AHMAD, whose telephone number is (703) 756-5306. The examiner can normally be reached Monday - Friday, 9:00am - 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611

/N.U.A./
Examiner, Art Unit 2611
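Claims 11 and 16 in the action above turn on an API that receives a reduction operation as input. A minimal hypothetical sketch of that shape (invented names; a real implementation would dispatch the reduction to a GPU kernel rather than fold it on the CPU):

```python
from functools import reduce

def api_receive_reduction(src, reduction_op):
    """Hypothetical API: the reduction operation arrives as an input
    parameter and is applied to the data moved by the transaction."""
    staged = list(src)                   # stand-in for the memory transaction
    return reduce(reduction_op, staged)  # caller-supplied reduction

total = api_receive_reduction([1, 2, 3, 4], lambda a, b: a + b)
```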

Prosecution Timeline

Dec 21, 2022: Application Filed
Nov 12, 2024: Non-Final Rejection (§103)
Feb 18, 2025: Interview Requested
Feb 27, 2025: Examiner Interview Summary
Feb 27, 2025: Applicant Interview (Telephonic)
Mar 18, 2025: Response Filed
Apr 07, 2025: Final Rejection (§103)
May 02, 2025: Interview Requested
May 13, 2025: Examiner Interview Summary
May 13, 2025: Applicant Interview (Telephonic)
Jul 11, 2025: Request for Continued Examination
Jul 14, 2025: Response after Non-Final Action
Aug 04, 2025: Non-Final Rejection (§103)
Dec 03, 2025: Applicant Interview (Telephonic)
Dec 03, 2025: Examiner Interview Summary
Jan 06, 2026: Response Filed
Jan 26, 2026: Final Rejection (§103)
Apr 02, 2026: Examiner Interview Summary
Apr 02, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592036: BLENDING ELEVATION DATA INTO A SEAMLESS HEIGHTFIELD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12530807: METHODS AND SYSTEMS FOR COMPRESSING DIGITAL ELEVATION MODEL DATA (granted Jan 20, 2026; 2y 5m to grant)
Patent 12518472: DEFORMABLE NEURAL RADIANCE FIELDS (granted Jan 06, 2026; 2y 5m to grant)
Patent 12518482: VIRTUAL REPRESENTATIVE CONDITIONING SYSTEM (granted Jan 06, 2026; 2y 5m to grant)
Patent 12505601: CONTENT DISPLAY CONTROL DEVICE, CONTENT DISPLAY CONTROL METHOD, AND STORAGE MEDIUM STORING CONTENT DISPLAY CONTROL PROGRAM (granted Dec 23, 2025; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 78%
Grant Probability With Interview: 98% (+19.8%)
Median Time to Grant: 2y 8m
PTA Risk: High

Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
