Prosecution Insights
Last updated: April 19, 2026
Application No. 18/197,193

Data Analytical Engine System and Method

Current Status: Non-Final OA (§103)

Filed: May 15, 2023
Examiner: CHEUNG, HUBERT G
Art Unit: 2152
Tech Center: 2100 — Computer Architecture & Software
Assignee: Madhumita Bhattacharyya
OA Round: 5 (Non-Final)

Grant Probability: 63% (Moderate)
Expected OA Rounds: 5-6
Median Time to Grant: 4y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 63% (246 granted / 390 resolved; +8.1% vs TC avg)
Interview Lift: +49.3% for resolved cases with interview (strong)
Avg Prosecution: 4y 6m typical timeline; 23 applications currently pending
Total Applications: 413 across all art units
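The headline figures above follow directly from the raw counts. A minimal sketch in Python reproduces them; note the interview subgroup counts below are hypothetical, since the report gives only the resulting lift, not the underlying split:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Fraction of resolved applications that were granted."""
    return granted / resolved

# Career allow rate from the counts reported above: 246 granted / 390 resolved.
career = allow_rate(246, 390)  # ~0.631, shown in the report as 63%

# Interview lift = allow rate among interviewed cases minus allow rate among
# non-interviewed cases. The subgroup counts here are HYPOTHETICAL -- the
# report states only the resulting +49.3% lift, not the underlying split.
lift = allow_rate(99, 100) - allow_rate(50, 100)  # +49 percentage points
```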

Statute-Specific Performance

§101: 11.6% (−28.4% vs TC avg)
§103: 47.9% (+7.9% vs TC avg)
§102: 17.3% (−22.7% vs TC avg)
§112: 13.4% (−26.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 390 resolved cases
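Each delta above is the examiner's rate minus the Tech Center average. Recovering the implied baseline from the reported pairs (a quick consistency check on the figures shown) confirms the chart's single black line sits at 40.0% for every statute:

```python
# Examiner rate and delta vs. Tech Center average, per statute, as reported above.
examiner_rate = {"101": 11.6, "103": 47.9, "102": 17.3, "112": 13.4}
delta_vs_tc = {"101": -28.4, "103": 7.9, "102": -22.7, "112": -26.6}

# Since delta = examiner_rate - tc_average, the implied TC average is:
tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
# Every statute resolves to the same 40.0% baseline (one black line on the chart).
```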

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to the RCE, filed on 10/10/2025, and the amendments, arguments and remarks, filed on 10/1/2025, in which claim(s) 1, 4, 5, 8, 11, 12, 15, 18 and 19 is/are presented for further examination. Claim(s) 1, 8 and 15 has/have been amended. Claim(s) 2, 3, 6, 7, 9, 10, 13, 14, 16, 17 and 20 has/have been cancelled or has/have been previously cancelled.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed on 10/1/2025 has been entered.

Response to Amendment

Applicant’s amendment(s) to claim(s) 1, 8 and 15 has/have been accepted. The examiner thanks applicant’s representative for pointing out where s/he believes there is support for the amendment(s).

Response to Arguments

Applicant’s arguments with respect to claim(s) 1, 4, 5, 8, 11, 12, 15, 18 and 19, filed on 10/1/2025, have been fully considered but they are not persuasive. Applicant’s arguments with respect to the rejection(s) of claim(s) 1, 4, 5, 8, 11, 12, 15, 18 and 19 under 35 U.S.C. 103, see the middle of page 8 to page 11 of applicant’s remarks, filed on 10/1/2025, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Objections

Claim(s) 1 is/are objected to because of the following informalities: in line 19, “eliminating the need” should be corrected to “eliminating a need”; and in lines 27-31, “wherein the customizable library of machine learning algorithms comprise at least one of one or more: artificial intelligence (AI) models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms, and wherein these AI models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms;” should be corrected to “wherein the customizable library of machine learning algorithms comprise at least one of one or more: artificial intelligence (AI) models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms” (i.e., the “wherein” statement does not further define the “AI models”, “DL algorithms” and “NLP algorithms”). Appropriate correction is required.

Claim(s) 8 is/are objected to because of the following informalities: in line 17, “eliminating the need” should be corrected to “eliminating a need”; and in lines 25-29, “wherein the customizable library of machine learning algorithms comprise at least one of one or more: artificial intelligence (AI) models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms, and wherein these AI models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms;” should be corrected to “wherein the customizable library of machine learning algorithms comprise at least one of one or more: artificial intelligence (AI) models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms” (i.e., the “wherein” statement does not further define the “AI models”, “DL algorithms” and “NLP algorithms”). Appropriate correction is required.
Claim(s) 15 is/are objected to because of the following informalities: in line 18, “eliminating the need” should be corrected to “eliminating a need”; and in lines 26-30, “wherein the customizable library of machine learning algorithms comprise at least one of one or more: artificial intelligence (AI) models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms, and wherein these AI models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms;” should be corrected to “wherein the customizable library of machine learning algorithms comprise at least one of one or more: artificial intelligence (AI) models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms” (i.e., the “wherein” statement does not further define the “AI models”, “DL algorithms” and “NLP algorithms”). Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 4, 5, 8, 11, 12, 15, 18 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Srinivasa et al., US 2019/0163679 A1 (hereinafter “Srinivasa”) in view of Brebner, US 2020/0007556 A1 (hereinafter “Brebner”) in further view of Wu et al., “New Algorithm for Computing Cube on Very Large Compressed Data Sets”, IEEE Transactions on Knowledge & Data Engineering, Vol. 18, No.
12, December 2006 (hereinafter “Wu”) in further view of Manda et al., US 2023/0153641 A1 (hereinafter “Manda”).

Claims 1, 8 and 15

Srinivasa discloses a data analytical system for one or more industry sectors (Note: “for one or more industry sectors” is interpreted as intended use) comprising: a non-transitory memory storing computer-readable instructions (Srinivasa, [0136], see permanent non-transitory mass storage device; and Srinivasa, [0139], see computer readable program instructions); and at least one hardware processor (Srinivasa, [0139], see processor) to execute the instructions to: receive database authentication information from a client computing device (Srinivasa, [0070], see authentication options for accessing different data sources with different security requirements and see associated authentication procedures; and Srinivasa, [0071], see the user may be asked for authentication credentials to allow them to access the data sources including the information in which they are interested and the data sources 222 may have security settings associated with it so that the user interface on the client device 202 may be configured to limit the view), continually obtain data from a first data source using the database authentication information, the first data source storing data having a first representation of the data (Srinivasa, [0078], see “The data source is then accessed at 310 and queried using the language appropriate for the specific data source based on the mapped integrity constraints of the data source specific schema [i.e., interpreted as the “first representation of the data”] in the context of the cohesive query. … In some aspects, a security protocol may be required or initiated before granting access to the information in the data source.”; and Srinivasa, [0079], see “The information is then extracted from the data source at 312. In some aspects, the information is extracted and stored in a database. 
The stored information may be updated manually or automatically as additional records of interest are added to the data source at 312. …”); continually convert and store the data in a second data source in real-time as the data is received in the first data source, the second data source having a second representation of the data that is different from the first representation of the data (Srinivasa, [0079], see “The information is then extracted from the data source at 312. In some aspects, the information is extracted and stored in a database. The stored information may be updated manually or automatically as additional records of interest are added to the data source at 312. The extracted data is returned, for example, to the integration module or data integration schema module of FIG. 2, which maps the data in reverse, integrating data from different sources by reassigning the field or fields used by the data source to those of the modules [i.e., where the field(s) of the module(s) is being interpreted as “second representation of the data that is different from the first representation of the data”] and combining the resulting information from disparate data sources and data types into a cohesive whole at 314 attached to a distributed data sets. Steps included at 322 may be performed serially, in parallel, or in a distributed manner on one or more data sources. …”; and Srinivasa, [0024], see “… The disclosed methods allow data to remain in silos and thus the data being accessed by a researcher or clinician includes the most up to date records and does not change the data structure of the data sources. . . 
.”, where the most up to date records requires continuous updating), receive a selection of at least one machine learning algorithm from a customizable library of machine learning algorithms, including a subset of favorite machine learning algorithms, and add at least one of the favorite machine learning algorithms add-in 232 or the additional functionality may provide various authentication options for accessing different data sources with different security requirements and may provide various filtering and display options for control of data presented to the user. Further, add-in 232 may perform aggregate queries in a distributed environment. … Add-in 232 may provide specific functions that utilize various parameters to manage data from specified data sources and to handle different data source and associated authentication procedures and data storage formats.”, where through the add-in(s), discloses configuration of allowing what data can be viewed by specific users [i.e., customizable library”]) to perform at least one operation on the data in the second data source (Srinivasa, [0098], see “With regard to FIGS. 8A-8B computations/analytics 845 may include machine learning, which may include deep learning. Machine learning methods may include but are not limited to linear regression, logistic regression, elastic nets, singular value decomposition, restricted Boltzmann machines, Markov chains, latent Dirichlet allocation, association rules, gradient boosted decision trees, random forests, clustering techniques, and/or matrix factorization [i.e., interpreted as the “customizable library of machine learning algorithms”]. Machine learning may be utilized to uncover medically-relevant insights via learning from relationships and trends in the data included in data pool 878 (FIG. 8A), or data pool 896 (FIG. 
8B) [i.e., interpreted as the “at least one operation on the data in the second data source”, taking the broadest reasonable interpretation of “operation” because it has not been defined in the claim].”, where by being able to select which machine learning method to use makes it customizable; Srinivasa, [0070] discloses “…, the add-in 232 or additional functionality may provide various functions which can directly interface with various specified data sources to import, format and update data provided by the integration server 224. For example, the add-in 232 or the additional functionality may provide various authentication options for accessing different data sources with different security requirements and may provide various filtering and display options for control of data presented to the user. Further, add-in 232 may perform aggregate queries in a distributed environment. … Add-in 232 may provide specific functions that utilize various parameters to manage data from specified data sources and to handle different data source and associated authentication procedures and data storage formats.” Through the add-in(s), there is the ability to provide options to select, such as selecting the machine learning algorithm, and the ability to register the selection of the option; and Srinivasa, [0070] discloses configuration allowing what data can be viewed by specific users (i.e., a customizable library)), receive a request to create a visualization of the data and generate a visualization of the data using the second representation of the data based on the at least one machine learning algorithm (Srinivasa, [0070], see “… The add-in 232 may query distributed data partitions, perform a specified analysis report within the distributed environment, and send the results back for visualization on the client device 202. …”; Srinivasa, [0135], see “Display 1410 may be used to present a visual representation of data held within memory 1418 or database 228. 
As the herein described methods and processes change the data held in the memory 1418 or database 228, the state of the information displayed may also change. For example, display 1410 may be used to present a visual representation of data using, for example, a “Graphics processing unit” (GPU), a processing unit that comprises a programmable logic chip (processor) specialized for display functions. The GPU may render images, animations, and video for a computer screen. …”; and Srinivasa, [0070], see “… , in the case of genomics, a query for a gene name must be translated into genomic coordinates which are then mapped to the data source's positional and locational information. The data integration module may store metadata information about the data source instance and how the partitions map to genomic locations as defined by the reference genome of the original VCF files which may be utilized by the add-in 232 or the integration server 224 to create cohesive queries and query segments.”, where the data is queried using the mapping representations of the data); transmit the visualization of the data to the client computing device (Srinivasa, [0070], see “… The add-in 232 may query distributed data partitions, perform a specified analysis report within the distributed environment, and send the results back for visualization on the client device 202. …”). 
Srinivasa does not appear to explicitly disclose an artificial intelligence (AI) based data analytical system; wherein upon receiving the database authentication information one or more security configurations are initialized, including the one or more security configurations associated with Lightweight Directory Access Protocol (LDAP), custom credentials, and external security options; wherein the second representation of the data comprises at least one of a hypercube and an offset cube, wherein the offset cube is used to manage big data by performing aggregation on-the-fly during data ingestion into a connected database, thereby eliminating the need for regeneration or reloading of cubes upon arrival of new data, wherein the second representation of the data uses directed acyclic graphs (DAGs), dynamic aggregation, in-memory access, and use of at least one of: the hypercube and the offset cube data organization; adding to a template; wherein the library of machine learning algorithms comprise at least one of one or more: artificial intelligence (AI) models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms, and wherein these AI models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms; determine recommendations for created visualizations of the data through a generative AI enabled recommendation engine to provide interpretation of complicated statistical outputs; transmit recommendations. Brebner discloses an artificial intelligence (AI) based data analytical system (Brebner, [0499], see the generative content system 1100 is configured to structure and generate content that is used in machine learning and/or artificial intelligence tasks. In some embodiments, the generative content system 1100 may collect, structure, and generate training data that is used to train machine learned models used in various types of systems. 
In these embodiments, the generative content system 1100 generates training data that supplements collected training data collected from one or more data sources 1104. The generative content system 1100 may then train the one or more models based on the combination of the collected data and the generated data); wherein upon receiving the database authentication information one or more security configurations are initialized, including the one or more security configurations associated with Lightweight Directory Access Protocol (LDAP), custom credentials, and external security options (Brebner, [0392], see the authentication module 286 authenticates client application instances. …, the authentication module 286 may utilize a third party security service (e.g., an identity management service such as LDAP) to authenticate a user of a client application. Alternatively, a user may simply provide a user ID (e.g., an email address) and a password [i.e., “custom credentials”] via a client device 214, and the authentication module 286 may use the security service to authenticate the user based on the user ID, the password. In response to authenticating a user, either internally or externally [i.e., “external security options”] (which may be an administrator-provided configuration decision), the security service or the authentication module may issue a session token to the combination of the user and the client application instance. A session token may be a token that is used to enable communication between the client application instance and the server instance 204 during a communication session. The session token may or may not have an expiry (e.g., an indication of how long the communication session lasts). In response to issuing the session token, the authentication module 286 may establish a communication session over which the client application instance may transmit to and receive data from the server instance 204. 
For example, the authentication module 286 may allow the client application instance to subscribe to a socket-based channel, whereby the client application instance may communicate with the server instance via the socket-based channel. Once the session token is issued to the client application instance, and the communication session is established, the client application instance may communicate resource calls to the server instance 204. … , one or more behaviors and/or properties of the authentication module 286 may be configured by an administrator using configuration statements that include configuration parameters relating to: an external authentication service to use to authenticate a user [i.e., “external security options”], configurations to define caching of users from external systems [i.e., “external security options”], rules for the expiry of session tokens, and the like. The configuration parameters may be defined in respective scene tree objects of a server scene tree. At run time, the scene tree object manager may instantiate an instance of the authentication module 286 using the configuration parameters defined in the server scene objects, thereby configuring the instance in accordance with the desired behaviors and/or properties); adding to a template (Brebner, [0355], see upload a template for generating a particular file, where the template includes rules for using the template and/or mappings that define where particular types of data are inserted into the template); determine recommendations for created visualizations of the data through a generative AI enabled recommendation engine to provide interpretation of complicated statistical outputs (Brebner, [0498], see a literal representation comprised of visual content may be displayed as a 3D environment via a 2D screen or virtual reality display device, whereby a user may “traverse” the 3D environment and view various objects within the 3D environment from different points of view and at 
different levels of detail. In another example, a literal representation may be comprised of data points, such that the literal representation may be used for analytics and machine-learning tasks (e.g., training a machine learned model). The method may be applied to generate additional or alternative types of literal representations that may be used for any suitable applications; and Brebner, [0499], see the generative content system 1100 is configured to structure and generate content that is used in machine learning and/or artificial intelligence tasks. In some embodiments, the generative content system 1100 may collect, structure, and generate training data that is used to train machine learned models used in various types of systems. In these embodiments, the generative content system 1100 generates training data that supplements collected training data collected from one or more data sources 1104. The generative content system 1100 may then train the one or more models based on the combination of the collected data and the generated data; and see also below, Wu, page 1669, 3. Heuristic Algorithm for Generating Cube Computation Plan, Definition 2, see cube lattice of Cube(R, F) represented by a directed acyclic graph); transmit recommendations (Brebner, [0498], see a literal representation comprised of visual content may be displayed as a 3D environment via a 2D screen or virtual reality display device, whereby a user may “traverse” the 3D environment and view various objects within the 3D environment from different points of view and at different levels of detail. In another example, a literal representation may be comprised of data points, such that the literal representation may be used for analytics and machine-learning tasks (e.g., training a machine learned model). 
The method may be applied to generate additional or alternative types of literal representations that may be used for any suitable applications, see how the 3D environment is displayed, where it is first sent to the display before it is shown).

Srinivasa and Brebner are analogous art because they are from the same field of endeavor, such as combining and processing data. It would have been obvious to one of ordinary skill in the art before the effective filing of the invention, having the teachings of Srinivasa and Brebner before him/her, to modify the database of Srinivasa to include the generative artificial intelligence of Brebner because it would allow learning patterns and using that knowledge to solve problems. The suggestion/motivation for doing so would have been to analyze data to optimize management and improve results. Therefore, it would have been obvious to combine Brebner with Srinivasa to obtain the invention as specified in the instant claim(s).

The combination of Srinivasa and Brebner does not appear to explicitly disclose wherein the second representation of the data comprises at least one of a hypercube and an offset cube, wherein the offset cube is used to manage big data by performing aggregation on-the-fly during data ingestion into a connected database, thereby eliminating the need for regeneration or reloading of cubes upon arrival of new data, wherein the second representation of the data uses directed acyclic graphs (DAGs), dynamic aggregation, in-memory access, and use of at least one of: the hypercube and the offset cube data organization; wherein the library of machine learning algorithms comprise at least one of one or more: artificial intelligence (AI) models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms, and wherein these AI models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms. 
Wu discloses wherein the second representation of the data comprises at least one of a hypercube and an offset cube (Wu, page 1667, Introduction, 1st 2 full paragraphs, see data warehouse operations using an OLAP cube, where an OLAP cube is often referred to as a “hypercube”; and Wu, page 1667, Introduction, 3rd and 4th full paragraphs, see multidimensional data spaces in cubes represented in a “chunk-offset” format [i.e., interpreted to be an “offset cube”]), wherein the offset cube is used to manage big data (Wu, Abstract, see improving the performance of data warehouses, where data warehouses are used for big data), wherein the second representation of the data uses directed acyclic graphs (DAGs), dynamic aggregation, in-memory access (Wu, page 1669, 3. Heuristic Algorithm for Generating Cube Computation Plan, Definition 2, see cube lattice of Cube(R, F) represented by a directed acyclic graph), and use of at least one of: the hypercube and the offset cube data organization (Wu, page 1667, Introduction, 1st 2 full paragraphs, see data warehouse operations using an OLAP cube, where an OLAP cube is often referred to as a “hypercube”; and Wu, page 1667, Introduction, 3rd and 4th full paragraphs, see multidimensional data spaces in cubes represented in a “chunk-offset” format [i.e., interpreted to be an “offset cube”]). Srinivasa, Brebner and Wu are analogous art because they are from the same field of endeavor, such as combining and processing data. It would have been obvious to one of ordinary skill in the art before the effective filing of the invention, having the teachings of Srinivasa, Brebner and Wu before him/her, to modify the generative artificial intelligence database of the combination of Srinivasa and Brebner to include the offset cube of Wu because it would allow saving huge storage space. The suggestion/motivation for doing so would have been to reduce the storage cost and computing costs for cubes, see Wu, page 1667, Introduction, 2nd full paragraph. 
Therefore, it would have been obvious to combine Wu with the combination of Srinivasa and Brebner to obtain the invention as specified in the instant claim(s). The combination of Srinivasa, Brebner and Wu does not appear to explicitly disclose by performing aggregation on-the-fly during data ingestion into a connected database, thereby eliminating the need for regeneration or reloading of cubes upon arrival of new data; wherein the library of machine learning algorithms comprise at least one of one or more: artificial intelligence (AI) models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms, and wherein these AI models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms. Manda discloses by performing aggregation on-the-fly during data ingestion into a connected database, thereby eliminating the need for regeneration or reloading of cubes upon arrival of new data (Manda, [0027], see the ingestion engine 130 can include event-driven programming components (e.g., one or more event listeners) that can coordinate the allocation of processing resources at runtime [i.e., “on-the-fly”] based on the size of the received input item submissions and/or other suitable parameters); wherein the library of machine learning algorithms comprise at least one of one or more: artificial intelligence (AI) models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms, and wherein these AI models, deep learning (DL) algorithms, and natural language processing (NLP) algorithms (Manda, [0019], see the machine learning platform can provide the generated optimized model input to one or more machine learning models structured to perform machine learning/artificial intelligence operations (ML/AI) on the input documents; and Manda, [0018], see the optimized model input can be generated using a pre-processing machine learning model, which can be or include a convolutional neural network (CNN), a deep learning 
(DL) model, a translational model, a natural language processing (NLP) model, a computer vision-based model, and/or the like).

Srinivasa, Brebner, Wu and Manda are analogous art because they are from the same field of endeavor, such as combining and processing data. It would have been obvious to one of ordinary skill in the art before the effective filing of the invention, having the teachings of Srinivasa, Brebner, Wu and Manda before him/her, to modify the generative artificial intelligence database in an offset cube of the combination of Srinivasa, Brebner and Wu to include the data ingestion of Manda because it would allow the inclusion of data in a format that is not capable of directly being processed by a machine learning model. The suggestion/motivation for doing so would have been to solve the technical problem of using machine learning models on unstructured data by pre-processing unstructured data in a manner that optimizes inputs to machine learning models and standardizes data attributes across a diverse set of inputs, see Manda, [0015]. Therefore, it would have been obvious to combine Manda with the combination of Srinivasa, Brebner and Wu to obtain the invention as specified in the instant claim(s).

Claim(s) 8 and 15 recite(s) similar limitations to claim 1 and is/are rejected under the same rationale. With respect to claim 15, Srinivasa discloses a non-transitory computer-readable storage medium, having instructions stored thereon (Srinivasa, [0136], see permanent non-transitory mass storage device; and Srinivasa, [0139], see computer readable program instructions). 
Claims 4, 11 and 18

With respect to claims 4, 11 and 18, the combination of Srinivasa, Brebner and Wu discloses wherein the database authentication information comprises at least one of a database type, a name, host information, a port number, a database name, a database username, and a database password (Srinivasa, [0071], see registering a data source by providing a path or address to the data source and security settings or other protocols and the user is asked for authentication credentials to allow them access to the data sources).

Claims 5, 12 and 19

With respect to claims 5, 12 and 19, the combination of Srinivasa, Brebner and Wu discloses wherein the first data source comprises at least one of one or more files, one or more databases (Srinivasa, [0076], see databases disclosed), one or more data warehouses, and one or more relational database management systems (RDBMS) (Srinivasa, [0075], see relational database; and Srinivasa, [0123], see retrieved from RDB [i.e., relational database, see Srinivasa, [0121]]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
– Baessler et al., 2023/0169041 for analyzing deduplication data blocks associated with unstructured documents; and
– Kazmi et al., 2023/0259499 for user generated tag collection system.

Point of Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUBERT G CHEUNG whose telephone number is (571) 270-1396. The examiner can normally be reached M-R 8:00A-5:00P EST; alt. F 8:00A-4:00P EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Neveen Abel-Jalil, can be reached at (571) 270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Examiner: Hubert Cheung
/Hubert Cheung/
Assistant Examiner, Art Unit 2152
Date: February 19, 2026
/NEVEEN ABEL JALIL/
Supervisory Patent Examiner, Art Unit 2152
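The limitation the rejection turns on, aggregation performed on-the-fly during ingestion so that cubes never need regeneration or reloading, describes incremental cube maintenance. A minimal sketch of that technique follows; it illustrates the claim language only and is not drawn from any cited reference, and the dimension names are invented:

```python
from collections import defaultdict
from itertools import combinations

# Incremental ("on-the-fly") cube maintenance: every group-by level of the
# cube is updated as each record is ingested, so newly arriving data never
# forces a cube rebuild or reload. Dimension names are hypothetical.
DIMS = ("region", "product")
cube = defaultdict(float)  # frozenset of (dimension, value) pairs -> running sum

def ingest(record: dict, measure: float) -> None:
    pairs = [(d, record[d]) for d in DIMS]
    # Update every aggregation level of the cube lattice in a single pass.
    for r in range(len(pairs) + 1):
        for subset in combinations(pairs, r):
            cube[frozenset(subset)] += measure

ingest({"region": "EU", "product": "A"}, 10.0)
ingest({"region": "EU", "product": "B"}, 5.0)
# Grand total and every rollup are already current; nothing is regenerated.
grand_total = cube[frozenset()]
eu_total = cube[frozenset({("region", "EU")})]
```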

Prosecution Timeline

May 15, 2023 — Application Filed
Apr 05, 2024 — Non-Final Rejection (§103)
Jul 02, 2024 — Response Filed
Sep 24, 2024 — Final Rejection (§103)
Nov 13, 2024 — Interview Requested
Nov 20, 2024 — Response after Non-Final Action
Nov 20, 2024 — Examiner Interview Summary
Dec 05, 2024 — Applicant Interview (Telephonic)
Dec 05, 2024 — Response after Non-Final Action
Dec 17, 2024 — Request for Continued Examination
Dec 30, 2024 — Response after Non-Final Action
Mar 20, 2025 — Non-Final Rejection (§103)
Jun 26, 2025 — Response Filed
Aug 15, 2025 — Final Rejection (§103)
Sep 10, 2025 — Interview Requested
Sep 16, 2025 — Examiner Interview Summary
Oct 01, 2025 — Response after Non-Final Action
Oct 10, 2025 — Request for Continued Examination
Oct 16, 2025 — Response after Non-Final Action
Feb 19, 2026 — Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596674 — A SYSTEM FOR DATA ARCHIVAL IN A BLOCKCHAIN NETWORK AND A METHOD THEREOF
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591611 — EFFICIENT ACCESS MARKING APPROACH FOR EFFICIENT RETRIEVAL OF DOCUMENT ACCESS DATA
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12585731 — APPARATUS AND METHODS FOR DETERMINING A PROBABILITY DATUM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12561306 — SYSTEMS AND METHODS FOR OPTIMIZING DATA PROCESSING IN A DISTRIBUTED COMPUTING ENVIRONMENT
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12547594 — SPATIAL-TEMPORAL STORAGE
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get these cases past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 63% (99% with interview, +49.3%)
Median Time to Grant: 4y 6m
PTA Risk: High

Based on 390 resolved cases by this examiner. Grant probability derived from career allow rate.
