Prosecution Insights
Last updated: April 19, 2026
Application No. 18/086,451

APPLICATION PROGRAMMING INTERFACE TO GENERATE A TENSOR MAPPING

Final Rejection — §101, §103
Filed
Dec 21, 2022
Examiner
SEYE, ABDOU K
Art Unit
2198
Tech Center
2100 — Computer Architecture & Software
Assignee
Nvidia Corporation
OA Round
2 (Final)
Grant Probability
82% (Favorable)
Expected OA Rounds
3-4
Time to Grant
3y 5m
Grant Probability With Interview
99%

Examiner Intelligence

Career Allow Rate
82% (480 granted / 583 resolved; +27.3% vs TC avg) — above average
Interview Lift
+27.5% among resolved cases with interview
Typical Timeline
3y 5m avg prosecution; 38 applications currently pending
Career History
621 total applications across all art units

Statute-Specific Performance

§101: 21.6% (-18.4% vs TC avg)
§103: 54.6% (+14.6% vs TC avg)
§102: 2.8% (-37.2% vs TC avg)
§112: 13.0% (-27.0% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 583 resolved cases.

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

Statement of Claims

Claims 1-9 and 14-15 were amended. Claims 1-20 remain pending in the application and are being considered on the merits.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 07/23/2025, 08/29/2025, 10/31/2025, 12/19/2025, 01/21/2026, and 03/03/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Arguments

Patent-Eligible Subject Matter Under 35 U.S.C. § 101

Applicant argues: "Amended claim 1, which for the purpose of this discussion is representative of claims 8 and 14, recites, in part: 'circuitry to, in response to a call to an application programming interface (API), cause a data structure that includes a mapping from a first tensor to a second tensor to be generated.' Applicant respectfully submits that claim 1 does not recite an abstract idea."

In response, the Examiner respectfully disagrees and submits the following. Taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functionality of the computer itself. Accordingly, the additional element "circuitry" merely recites a generic computer or computer component for carrying out or applying the abstract idea, does not amount to significantly more than the abstract idea, and cannot provide an inventive concept. Therefore, in view of the amendment and Applicant's remarks, the § 101 rejection is not withdrawn.
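The claim language quoted above centers on an API call that generates a data structure mapping a first tensor to a second tensor. As a loose, non-authoritative illustration of what that kind of data structure could look like (every name below is hypothetical and invented for illustration; none is drawn from the application or the cited references):

```python
from dataclasses import dataclass

@dataclass
class TensorMapping:
    """Illustrative data structure describing how elements of a source
    ("first") tensor map to positions in a destination ("second") tensor."""
    src_shape: tuple      # shape of the first tensor
    dst_shape: tuple      # shape of the second tensor
    dst_strides: tuple    # row-major strides of the second tensor

def create_tensor_mapping(src_shape, permute):
    """Hypothetical API entry point: given the first tensor's shape and a
    dimension permutation, generate the mapping data structure."""
    dst_shape = tuple(src_shape[p] for p in permute)
    # Compute row-major strides for the destination layout.
    strides, acc = [], 1
    for dim in reversed(dst_shape):
        strides.append(acc)
        acc *= dim
    return TensorMapping(tuple(src_shape), dst_shape, tuple(reversed(strides)))

# "API call": request a mapping that transposes a 2x3 tensor into a 3x2 one.
m = create_tensor_mapping((2, 3), permute=(1, 0))
```

This is only a sketch of the concept at issue, not the claimed circuitry or any actual implementation by the assignee.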
Applicant's argument regarding § 101 is not found to be persuasive; accordingly, the § 101 rejection is maintained. Claims 1, 8, and 14 also fail Step 2A, Prong 2, because the claims are directed to the judicial exception, which has not been integrated into a practical application, and fail Step 2B as not amounting to significantly more. Therefore, claims 1, 8, and 14 do not recite patent-eligible subject matter under 35 U.S.C. § 101, and claims 1-20 remain patent ineligible under 35 U.S.C. § 101.

Rejections Under 35 U.S.C. § 103

Applicant argues: "Applicant respectfully submits that the combination of Yang and Li does not disclose or suggest the subject matter of claim 1, including at least 'causing a data structure that includes a mapping from a first tensor to a second tensor to be generated.'"

The Examiner respectfully disagrees and submits: Applicant's arguments with respect to the newly added limitations have been considered but are moot because the arguments do not apply to the newly cited reference Liu et al. (US 2021/0117806) being used in the current rejection.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Under Step 2A, Prong 1, claim 1 recites: one or more processors, comprising circuitry to, in response to an application programming interface (API) call, cause a data structure that includes a "mapping from a first tensor to a second tensor to be generated."
The "mapping" limitation is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than "in response to an application programming interface (API) call," nothing in the claim element precludes the step from practically being performed in the human mind or with the aid of pen and paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, it falls within the "Mental Processes" grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, and opinion).

Under Prong 2, the judicial exception is not integrated into a practical application. The additional element "in response to an application programming interface (API) call, cause a data structure ... to be generated" amounts to no more than mere instructions to apply the exception using a generic computer component. The additional elements "processors," "circuitry," "tensor," and "application programming interface (API) call" are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). The claim is directed to an abstract idea.

Under Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitation "to cause" amounts to no more than mere instructions to apply the exception using a generic computer component and does not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). Therefore, claim 1 as a whole does not amount to significantly more than the judicial exception. Consequently, claim 1 is not eligible.

Claim 2 recites the additional element "wherein the data structure that includes the mapping to be generated is a tensor descriptor," which is merely data gathering, which the courts have identified as well-understood, routine, and conventional (WURC). See MPEP 2106.05(d). The additional elements "processor" and "tensor" merely recite generic computer components for carrying out or applying the abstract idea. Accordingly, the additional elements do not amount to significantly more than the abstract idea and cannot provide an inventive concept.

Claim 3 recites the additional element "the data structure that includes the mapping to be generated is a tensor descriptor, and the data structure also includes information that indicates a structure of a first tensor stored in a first memory of a graphics processing unit (GPU), and indicates a structure of a second tensor to be stored in a second memory of the GPU based, at least in part, on the mapping and the first tensor," which is merely data gathering, which the courts have identified as WURC. See MPEP 2106.05(d). The additional elements "tensor," "memory," and "graphics processing unit (GPU)" merely recite generic computer components for carrying out or applying the abstract idea. Accordingly, the additional elements do not amount to significantly more than the abstract idea and cannot provide an inventive concept.
Claim 4 recites the additional element "wherein the mapping is to be used to store data of the first tensor to be stored according to the mapping," which is insignificant extra-solution activity (e.g., selecting a particular data source or type of data to be manipulated) that does not integrate the judicial exception into a practical application. See MPEP 2106.05(d). The additional elements "to store data" and "tensor" merely recite generic computer components for carrying out or applying the abstract idea. Accordingly, these additional elements do not integrate the judicial exception into a practical application, do not amount to significantly more than the abstract idea, and cannot provide an inventive concept.

Claim 5 recites the additional element "wherein the API is to receive as input information indicating a storage location in which to store the mapping," which is insignificant extra-solution activity. See MPEP 2106.05(d). The additional element "storage location in which to store" merely recites a generic computer component for carrying out or applying the abstract idea. Accordingly, this additional element does not integrate the judicial exception into a practical application, does not amount to significantly more than the abstract idea, and cannot provide an inventive concept.

Claim 6 recites the additional element "wherein the API is to receive as input a tensor data type," which is insignificant extra-solution activity. See MPEP 2106.05(d). The additional element "tensor data type" merely recites a generic computer component for carrying out or applying the abstract idea, and likewise cannot provide an inventive concept.

Claim 7 recites the additional element "wherein the API is to receive as input a tensor rank," which is insignificant extra-solution activity. See MPEP 2106.05(d). The additional element "tensor rank" merely recites a generic computer component for carrying out or applying the abstract idea, and likewise cannot provide an inventive concept.

As to claims 8-20, a similar analysis as for claims 1-7 applies. Claims 8-20 further recite computer components: "A system, comprising: one or more processors" and "A non-transitory computer-readable medium having stored thereon a set of instructions," which are merely recitations of generic computing components (see MPEP 2106.05(f)) that do not integrate the judicial exception into a practical application. These elements represent no more than mere instructions to apply the judicial exception on a computer; the "system," "non-transitory computer-readable medium," "processors," and "instructions" merely use a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). This neither integrates the exception into a practical application nor provides significantly more.
Accordingly, the additional elements do not amount to significantly more than the abstract idea, do not integrate the judicial exception into a practical application, and thus cannot provide an inventive concept. For at least these reasons, claims 1-20 are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 8-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over YANG et al. (US 2023/0315410, "YANG") in view of Liu et al. (US 2021/0117806, "Liu").

As to claim 1, YANG teaches one or more processors, comprising circuitry (e.g., FIG. 1, para [0084], "a CGR processor 110") to, in response to an application programming interface (API) call (e.g., para [0159], "query arguments in an API," "API functions," "input/output tensors in an application," and para [0160], "App 602 can comprise an application," FIG. 6; the "query arguments in an API" coupled with the "API functions" include an API call), cause a mapping from a first tensor to a second tensor (e.g., FIG. 6, para [0165], "mapper 620 can comprise a component or function of compiler 600 to determine mapping decisions to map operations and data of app 602 to CGR hardware resources of a CGRS to execute app 602," and para [0156], "Using a an API of a search space, a mapper can, for example, identify operators, and their associated input/output tensors, that can form such a pipeline (or, pipelines)"; the "input/output tensors, that can form such a pipeline" include a first tensor and a second tensor).

YANG does not explicitly teach, in response to the API call, causing a data structure that includes a mapping from the first tensor to the second tensor to be generated.

Liu teaches a data structure (e.g., "308," FIGs. 4A/B) that includes a mapping from the first tensor to the second tensor to be generated (e.g., paras [0030]-[0031], "a generic tensor descriptor," "The generic tensor descriptor facilitates proper mapping of elements of a multi-index to elements of generic tensor raw data," "a generic tensor T is a n-dimensional Cartesian grid structure," and paras [0038]-[0039] and [0047], "A second generic tensor descriptor B," "generic tensor descriptor B can be thought of as being constructed using generic tensor descriptor A," the "transformation function" that "transforms generic tensor descriptor A into B," and "transformation functions can be chained and applied to an existing generic tensor descriptor to construct a series of new generic tensor descriptors"; the "n-dimensional Cartesian grid structure" coupled with the tensor descriptors represents the data structure, and the construction of descriptor B from descriptor A via a transformation function, together with the "proper mapping of elements of a multi-index to elements of generic tensor raw data," represents a data structure that includes a mapping from the first tensor to the second tensor to be generated), in response to an application programming interface (API) call (e.g., para [0026], "in response to an invocation such as an API call").

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of YANG with those of Liu because both references are directed to related systems addressing similar technical problems within the same field and seek to improve system performance, reliability, and efficiency. YANG discloses an API to cause a mapping from a first tensor to a second tensor, while Liu teaches, in response to the API call, causing a data structure that includes a mapping from the first tensor to the second tensor to be generated. Incorporating the teachings of Liu into the system of YANG would have been a predictable and logical modification, yielding improved operational robustness and efficiency without requiring undue experimentation. Such a combination would merely involve the substitution or integration of known elements performing their established functions, consistent with design incentives and market demands for improved performance and scalability. Moreover, Liu explicitly recognizes the benefit of "developing faster and more efficient hardware and software stacks for high performance computing for neural networks" (Liu, para [0002]), which would naturally be desirable in the system of YANG. Accordingly, one of ordinary skill in the art would have had a reasonable expectation of success in combining YANG with Liu, and the combination represents no more than the predictable use of prior art elements according to their known functions.

As to claim 2, YANG does not explicitly teach that the data structure that includes the mapping to be generated is a tensor descriptor.
However, Liu teaches that the data structure that includes the mapping to be generated is a tensor descriptor (e.g., "tensor descriptors," FIG. 4B; see the rejection of claim 1 above). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the method of YANG by adopting the teachings of Liu in order to "develop[] faster and more efficient hardware and software stacks for high performance computing for neural networks" (Liu, para [0002]).

As to claim 3, YANG does not explicitly teach that the data structure that includes the mapping to be generated is a tensor descriptor, and that the data structure also includes information that indicates a structure of a first tensor stored in a first memory of a graphics processing unit (GPU) and indicates a structure of a second tensor to be stored in a second memory of the GPU based, at least in part, on the mapping and the first tensor.

However, Liu teaches these features (e.g., FIG. 2 and paras [0026]-[0031]: "software code executing on a programmable processor (e.g., a CPU or a GPU), a compiler, hardware that performs the operations in response to an invocation such as an API call"; "the slice operator 'slices out' a given portion of a tensor to make a new generic tensor descriptor"; "the data involved is stored in memory in a particular data format. Specifically, the inputs, weights, and outputs are stored as tensors having specific data formats"; "a generic tensor descriptor also provides a means to calculate the memory 'offset' of a tensor raw data that is associated with a multi-index. Thus, a generic tensor descriptor indicates the manner in which a multi-index maps to generic tensor raw data as stored in memory. The generic tensor descriptor facilitates proper mapping of elements of a multi-index to elements of generic tensor raw data. Indicating the generic tensor type also indicates the manner in which the multi-index of the generic tensor descriptor maps to the generic tensor raw data. A normal tensor is described elsewhere herein. With a generic tensor stored in memory, the point of origin corresponds to the base address of the generic tensor."). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the method of YANG by adopting the teachings of Liu in order to "develop[] faster and more efficient hardware and software stacks for high performance computing for neural networks" (Liu, para [0002]).

As to claim 4, YANG teaches wherein the mapping is to be used to store data of the first tensor according to the mapping (e.g., para [0197], "A mapper can determine tiling decisions based on, in another example, whether or not output tensor data must be buffered in stage buffers, and/or remote memories, between pipeline stages, and the type/sizes of CGR memories to operate as stage buffers," and para [0202], "a number of operators in a graph that can form a pipeline; and/or transfers of tensor data in and/or out of memories").
As to claim 5, YANG teaches wherein the API is to receive as input information indicating a storage location in which to store the mapping (e.g., para [0198], "As results tensors must sometimes be materialized in a stage buffers between processors implementing operators of a graph, a mapper can determine tiling decisions based upon attributes of particular memories, or a number of memories, utilized as stage buffers to store the input/output tensor data"; the "attributes of particular memories" include input information indicating a storage location).

As to claim 6, YANG teaches wherein the API is to receive as input a tensor data type (e.g., para [0156], "A search space can comprise attributes of operators, input/output tensors, such as operator type, dimensions of input/output, size (e.g., number of elements) of input/output dimensions, and so forth. Using a an API of a search space, a mapper can, for example, identify operators, and their associated input/output tensors, that can form such a pipeline (or, pipelines)"; the "operator type" for the "input/output tensors" represents the tensor data type).

As to claim 8, see the rejection of claim 1 above. YANG further teaches a system, comprising one or more processors (e.g., FIG. 1).

As to claims 9-12, see the rejection of claims 2-5 above.

As to claim 13, YANG teaches wherein the API is to receive as input a plurality of characteristics of the first tensor (e.g., para [0156], quoted above; the "operator type, dimensions of input/output, size (e.g., number of elements) of input/output dimensions, and so forth" for the "input/output tensors" represent the plurality of characteristics).

As to claim 14, see the rejection of claim 1 above. As to claim 15, see the rejection of claim 2 above. As to claim 16, see the rejection of claim 13 above. As to claim 17, see the rejection of claim 5 above.

As to claim 19, YANG teaches wherein the API is to receive as input information indicating how the first tensor is laid out in memory (e.g., para [0197], "A mapper can determine tiling decisions based on, in another example, whether or not output tensor data must be buffered in stage buffers, and/or remote memories, between pipeline stages, and the type/sizes of CGR memories to operate as stage buffers," and para [0202], "a number of operators in a graph that can form a pipeline; and/or transfers of tensor data in and/or out of memories").

As to claim 20, see the rejection of claim 1 above. YANG further teaches a non-transitory computer-readable medium having stored thereon a set of instructions which, if performed by one or more processors, cause the one or more processors to at least perform the method (e.g., FIG. 11, para [0085], "Host 180 can be, or can include, a computer such as will be further described with reference to FIG. 11. Host 180 can execute runtime processes, as further referenced herein, and can also be used to run computer programs, such as a CGRS compiler. In some implementations, the compiler can run on a computer that is similar to the computer described with reference to FIG. 11, but separate from host 180," and para [0086], "CGR processor 110 can accomplish computational tasks by executing a configuration file (for example, a PEF file). For the purposes of this description, a configuration file corresponds to a dataflow graph, or a translation of a dataflow graph, and can further include initialization data. A compiler compiles the high-level program to provide the configuration file. In some implementations described herein, a CGR array is configured by programming one or more configuration stores with all or parts of the configuration file. A single configuration store can be at the level of the CGR processor or the CGR array, or a CGR unit can include an individual configuration store. The configuration file can include configuration data for the CGR array and CGR units in the CGR array, and link the computation graph to the CGR array. Execution of the configuration file by CGR processor 110 causes the CGR array(s) to implement the user algorithms and functions in the dataflow graph." Thus, a non-transitory computer-readable medium having stored thereon a set of instructions which, if performed by one or more processors, cause the one or more processors to at least perform the method would have been inherent).

Claims 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over YANG in view of Liu, as applied to claims 1 and 14 above, and further in view of LI et al. (US 2022/0164642, "LI").

As to claim 7, YANG and Liu do not teach wherein the API is to receive as input a tensor rank. However, LI teaches wherein the API is to receive as input a tensor rank (e.g., para [0010], "In the case of a multiplication of two tensors, where the first tensor has rank p and the second tensor has rank q, the shape of the first tensor is [N.sub.1, N.sub.2, . . . , N.sub.p−1, M], and the shape of the second tensor is [M, W.sub.1, . . . , W.sub.q−1]. N.sub.1, N.sub.2, . . . , N.sub.p−1, M, W.sub.1, . . . , W.sub.q−1 are each a positive integer greater than or equal to one.").
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the method of YANG and Liu by adopting the teachings of LI because, by "optimizing the reusability of data stored on local memories, the total energy consumption can be greatly reduced" (LI, para [0038]).

As to claim 18, YANG does not explicitly teach wherein the mapping indicates how to obtain data of the first tensor from global memory of a graphics processing unit (GPU), how to transform the data of the first tensor to obtain the second tensor, and a location in shared memory of the GPU in which to store the second tensor. However, LI teaches these features (e.g., FIG. 2D, para [0005], "tensor processing units (TPUs)," and para [0037], "Both convolution and linear combination can be reduced mathematically to matrix multiplications between a weight matrix W ∈ ℝ^(M×M) and a batch of input vectors {x.sub.1, . . . , x.sub.N} ∈ ℝ^(M×N). To improve matrix computation speeds in neural networks, data transformation and thread parallelization schemes are implemented in CPUs and GPUs [32], [33]," and "a TPU shown in FIG. 2D with increased parallelization and improved energy efficiency in memory access"; the "data transformation and thread parallelization schemes" implemented in CPUs and GPUs, together with the "tensor processing units (TPUs)," include a mapping indicating how to obtain data of the first tensor from global memory of a GPU, how to transform the data of the first tensor to obtain the second tensor, and a location in shared memory of the GPU in which to store the second tensor).
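Claim 18, as characterized in the passage above, involves a mapping with three parts: where to read the first tensor in global memory, how to transform it, and where to place the result in shared memory. A toy sketch of that three-part structure (purely illustrative; the memory model, field names, and function are invented here and are not the claimed method or any cited reference's implementation):

```python
# Toy model: "global" memory as a flat list, "shared" memory as a dict
# keyed by offset. The mapping record carries the three claimed pieces
# of information: source indices, a transform, and a destination offset.
def apply_mapping(global_mem, mapping):
    """Gather first-tensor elements from global memory, transform them,
    and place the resulting second tensor at a shared-memory location."""
    data = [global_mem[i] for i in mapping["src_indices"]]    # obtain
    second = [mapping["transform"](x) for x in data]          # transform
    shared_mem = {mapping["dst_offset"] + i: v for i, v in enumerate(second)}
    return shared_mem                                         # store

gmem = [10, 20, 30, 40]
mapping = {
    "src_indices": [3, 1],          # which first-tensor elements to read
    "transform": lambda x: x * 2,   # element-wise transform to apply
    "dst_offset": 100,              # where in shared memory to store
}
smem = apply_mapping(gmem, mapping)
```

On real GPU hardware this gather-transform-store would be performed by dedicated circuitry and parallel threads rather than a Python loop; the sketch only shows the shape of the mapping record.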
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the method of YANG and Liu by adopting the teachings of LI because, by "optimizing the reusability of data stored on local memories, the total energy consumption can be greatly reduced" (LI, para [0038]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDOU K SEYE, whose telephone number is (571) 270-1062. The examiner can normally be reached M-F, 9:00-5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Vital, can be reached at (571) 272-4215.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ABDOU K SEYE/
Examiner, Art Unit 2198

/PIERRE VITAL/
Supervisory Patent Examiner, Art Unit 2198

Prosecution Timeline

Dec 21, 2022
Application Filed
Jun 15, 2025
Non-Final Rejection — §101, §103
Aug 28, 2025
Interview Requested
Sep 04, 2025
Applicant Interview (Telephonic)
Sep 04, 2025
Examiner Interview Summary
Dec 18, 2025
Response Filed
Mar 17, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598527
Real-Time Any-G SON
2y 5m to grant — Granted Apr 07, 2026
Patent 12587456
MACHINE LEARNING BASED EVENT MONITORING
2y 5m to grant — Granted Mar 24, 2026
Patent 12585512
CUSTOMIZED SOCKET APPLICATION PROGRAMMING INTERFACE FUNCTIONS
2y 5m to grant — Granted Mar 24, 2026
Patent 12541410
THREAD SPECIALIZATION FOR COLLABORATIVE DATA TRANSFER AND COMPUTATION
2y 5m to grant — Granted Feb 03, 2026
Patent 12530245
CONTAINER IMAGE TOOLING STORAGE MIGRATION
2y 5m to grant — Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+27.5%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 583 resolved cases by this examiner. Grant probability derived from career allow rate.
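The footnote states that the grant probability is derived from the examiner's career allow rate, which the Examiner Intelligence panel gives as 480 granted out of 583 resolved cases. A quick arithmetic check that those figures are consistent with the 82% shown:

```python
# Career allow rate from the counts reported above.
granted, resolved = 480, 583
allow_rate = granted / resolved
print(f"{100 * allow_rate:.1f}%")  # prints "82.3%", matching the 82% figure
```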
