DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to the amendments, arguments and remarks, filed on 8/6/2025, in which claims 1, 3-10 and 12-22 are presented for further examination.
Claims 1, 10 and 19 have been amended.
Claims 2 and 11 have been previously cancelled.
Response to Amendment
Applicant’s amendments to claims 1, 10 and 19 have been accepted.
The examiner thanks applicant’s representative for pointing out where support for the amendments may be found.
Response to Arguments
Applicant’s arguments with respect to claims 1, 3-10 and 12-22, filed on 8/6/2025, have been fully considered but are not persuasive. Accordingly, this action has been made FINAL.
Applicant’s arguments with respect to the rejections of claims 1, 3-10 and 12-22 under 35 U.S.C. 103, see the bottom of page 8 to page 9 of applicant’s remarks, filed on 8/6/2025, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-10 and 12-22 are rejected under 35 U.S.C. 103 as being unpatentable over Adjei-Banin et al., US 2013/0268567 A1 (hereinafter “Adjei”), in view of Wang et al., CN 104866608 B (hereinafter “Wang”; note: citations are based on the attached English translation), further in view of Zhang et al., US 9,501,550 B2 (hereinafter “Zhang”), and further in view of Malloy et al., US 2004/0122844 A1 (hereinafter “Malloy”).
Claims 1, 10 and 19
Adjei discloses a system to generate a database structure with a low-latency key architecture, the system comprising:
a data processing system comprising memory and one or more processors (Adjei, [0013], see one or more processors and one or more computer-readable storage mediums) to:
identify one or more natural keys of a dimensional object (Adjei, [0014], see natural key for a dimension; and Adjei, [0093], see implementation using objects), the dimensional object corresponding to at least a portion of a database structure (Adjei, [0013], see the dimension table may be a Type 2 slowly changing dimension table and may be populated with data records extracted from at least one source system. Each data record may be associated with at least one identifying hash value and at least one attribute hash value; and Adjei, [0093], see implementation using objects) and the one or more natural keys being unique in the database structure to the dimensional object (Adjei, [0014], see after the dimension table is established, the operations compute a set of hash values for an incoming set of data records extracted from the at least one source system. … The at least one identifying hash value may be a hash of a source natural key for a dimension [i.e., hashing of the natural key identifies the dimension] and the at least one attribute hash value may be a hash of non-key attributes of a dimension; and Adjei, [0073], see the business rule for a natural key is that it cannot change and is capable of being uniquely identified in a dimension);
identify a dependency of a fact object on the dimensional object, the fact object corresponding to the database structure (Adjei, [0088], see in order to accurately maintain the relationships between the fact tables and the dimension tables, all dimension tables are loaded first. As dimensions are loaded, each dimension member is assigned a surrogate key value as its primary key and each dimension member is assigned a source ID value indicating the source of the dimensions member's value. The fact tables are loaded after all the dimensions have been populated. During the fact table load the dimension member natural key and source system ID combinations facilitate lookups to retrieve dimension member surrogate keys. After all lookups are complete, the fact data are inserted into the target fact tables; and Adjei, [0093], see implementation using objects);
generate, based on hashing one or more of the natural keys (Adjei, [0014], see after the dimension table is established, the operations compute a set of hash values for an incoming set of data records extracted from the at least one source system. This computing operation may be performed by a collision-free hashing algorithm and may compute at least one identifying hash value and at least one attribute hash value for each data record contained in the incoming set of data records. The at least one identifying hash value may be a hash of a source natural key for a dimension [i.e., hashing of the natural key identifies the dimension] and the at least one attribute hash value may be a hash of non-key attributes of a dimension), a dimensional identifier associated with the dimensional object (Adjei, [0003], see a dimension modeled to capture Type 2 data changes typically consists of: surrogate key, a natural key, a row start date, a row end date, a most recent row indicator (current flag) and dimension attributes. For a given natural key, a change to these dimension attributes is detected and a new row is inserted into the dimension table with a new surrogate key. The row start date, row end date and most recent row indicator for the prior and new version of the rows are adjusted to reflect the new version of the record for the natural key; Adjei, [0055], see in order to uniquely identify records and detect changes, hashing algorithms are used within the ETL processor 105 to create keys that are stored within the global data warehouse 114; and Adjei, [0093], see implementation using objects);
assign, to the dimensional object, the dimensional identifier (Adjei, [0057], see Natural Key Hash (Hash NK): The Hash NK is created to uniquely identify source records (i.e., natural key of the data). The Hash NK is also used to maintain the integrity of the data relationships based on the natural keys defined within the source system. Each reference or dimensional entity contains a binary hash of the natural key (Hash NK Source) that represents a unique instance of the entity in the data warehouse table. The Hash NK is also used to represent a relationship (one-to-many and many-to-many) between the entities in the data warehouse; and Adjei, [0093], see implementation using objects);
link the fact object to the dimensional object prior to executing a query operation (Adjei, [0088], see, in order to accurately maintain the relationships between the fact tables and the dimension tables [i.e., see maintaining relationships/linking between the fact table(s) and dimension table(s), where the “fact table(s)” are the “fact object(s)”], all dimension tables are loaded first [i.e., “prior to executing a query operation”]); and
generate the database structure based on the assigned dimensional identifier and linked fact object in accordance with the build operation (Adjei, [0042], see creating a new version of the data set in a dimension or reference table [i.e., “generating the data structure”, where the aforementioned table(s) is/are the data structure]; Adjei, [0088], see, in order to accurately maintain the relationships between the fact tables and the dimension tables [i.e., see maintaining relationships/linking between the fact table(s) and dimension table(s), where the “fact table(s)” are the “fact object(s)”], all dimension tables are loaded first; Adjei, [0057], see Natural Key Hash (Hash NK): The Hash NK is created to uniquely identify source records (i.e., natural key of the data). The Hash NK is also used to maintain the integrity of the data relationships based on the natural keys defined within the source system. Each reference or dimensional entity contains a binary hash of the natural key (Hash NK Source) that represents a unique instance of the entity in the data warehouse table. The Hash NK is also used to represent a relationship (one-to-many and many-to-many) between the entities in the data warehouse; Adjei, [0014], see after the dimension table is established, the operations compute a set of hash values for an incoming set of data records extracted from the at least one source system. … The at least one identifying hash value may be a hash of a source natural key for a dimension [i.e., where the “source natural key for a dimension” is the “dimensional identifier”] and the at least one attribute hash value may be a hash of non-key attributes of a dimension; and Adjei, [0073], see the business rule for a natural key is that it cannot change and is capable of being uniquely identified in a dimension; and Adjei, [0093], see implementation using objects).
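By way of illustration only (this sketch is not part of the claim mapping or the record), the natural-key hashing described in Adjei, [0014] and [0057], can be outlined as follows; the function names, record layout, and the use of SHA-256 in place of Adjei's unspecified collision-free hashing algorithm are illustrative assumptions.

```python
import hashlib

def hash_nk(natural_key: str) -> str:
    # Identifying hash of the source natural key for a dimension (cf. Adjei's "Hash NK").
    return hashlib.sha256(natural_key.encode("utf-8")).hexdigest()

def hash_attrs(attributes: dict) -> str:
    # Attribute hash over the non-key attributes, used to detect Type 2 changes.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The dimension row is keyed by the natural-key hash; a fact row carries the
# same hash value to maintain the fact-to-dimension relationship.
dim_id = hash_nk("CUST-0001")
dim_row = {"hash_nk": dim_id, "name": "Acme", "hash_full": hash_attrs({"name": "Acme"})}
fact_row = {"customer_hash_nk": dim_id, "amount": 125.0}
```

Because the hash is computed deterministically from the natural key, the same identifier is reproduced on every load, and the fact row can reference the dimension without a surrogate-key lookup.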
Adjei does not appear to explicitly disclose identify a build of the database structure by executing a build operation corresponding to the dimensional identifier of the dimensional object;
the build operation in accordance with one or more parallelization protocols configured to assign the dimensional identifier and link the fact object in parallel and one or more transform protocols configured to generate the dimensional identifier based on hashing the one or more natural keys of the dimensional object for building the database structure, wherein the one or more parallelization protocols comprise one or more instructions for execution by a parallel processor, and wherein the one or more transform protocols comprise one or more instructions for execution by a transform processor configured to execute a transformation model associated with one or more transformation processes for generating transformed keys;
link, concurrently with the assigning, according to the build operation by loading the dimensional identifier in parallel with the fact object.
Wang discloses identify a build of the database structure by executing a build operation corresponding to the dimensional identifier of the dimensional object (Wang, page 2, 2nd full paragraph, see “data warehouse is based on a multidimensional data model of complicated data set, based on the OLAP database (Relational OLAP, ROLAP) query processing to the fact table and a plurality of dimension tables [i.e., where each “dimension table” has a corresponding “dimensional identifier” in order to reference that table and, collectively, all of the tables are the “database structure”] are connected to perform a complicated analysis and inquiry command, the connection operation [i.e., “build operation”] of the performance is always the most important analytical type query processing (OLAP) problem. connection index is a pre-connection establishing connection relationship between two or more tables [i.e., where how to establish the connections between 2 or more tables” are the “one or more transform protocols for building the database structure”] recording the index of connection index recorded in the connection address relationship between different tables of records. If the query processing index can be directly obtained by connecting two surface connection address of record to finish the connection operation, eliminating according to the connection key value for the connection operation cost of the search. connection index is mainly applied to analyzing type database used for optimizing the connection operation of the two lists with larger or more cost table. the bitmap connection index (bitmap join index) is an extended technology of connection index, it is connection-oriented relationship between two tables created bitmap index, a data warehouse typically uses a bitmap connection index optimized connection operation performance of dimension tables and fact tables. 
the bitmap connection index can be understood as created on the fact table is connected with the fact table tuple attribute of bitmap index, in the WHERE clause of the query comprising the join attribute of the predicate expression, the bitmap connection index can quickly returns the rows of the fact table corresponding to the dimension attribute of the fact table and the dimension table connection condition and the predicate expression. connection index is an important technology in the data warehouse the fact table and the dimension table connection operation performance, its main disadvantage is index memory space cost is large, in the OLAP query relates to a connecting operation between the fact table and multiple dimension, and queries to more attributes dimension table on connected storage space cost of index increased, dimension property on the increasing number of value also results in increasing the bitmap connection index in calculation cost of FIG. the current data warehouse application query transition to a dimension from a multidimensional query, OLAP query includes more and more connection table and dimension table attribute, and update frequency becomes higher and higher, the traditional connection index technology faces great storage and index maintenance cost.”).
Adjei and Wang are analogous art because they are from the same problem-solving area, namely processing dimensional data.
It would have been obvious to one of ordinary skill in the art before the effective filing date, having the teachings of Adjei and Wang before him/her, to modify the dimensional database of Adjei to include the database building operation of Wang because it would increase efficiency through query optimization.
The suggestion/motivation for doing so would have been to provide data warehouse query optimization suitable for a large-memory, multicore processor platform, see Wang, page 2, 3rd full paragraph.
Therefore, it would have been obvious to combine Wang with Adjei to obtain the invention as specified in the instant claim(s).
The combination of Adjei and Wang does not appear to explicitly disclose the build operation in accordance with one or more parallelization protocols configured to assign the dimensional identifier and link the fact object in parallel and one or more transform protocols configured to generate the dimensional identifier based on hashing the one or more natural keys of the dimensional object for building the database structure, wherein the one or more parallelization protocols comprise one or more instructions for execution by a parallel processor, and wherein the one or more transform protocols comprise one or more instructions for execution by a transform processor configured to execute a transformation model associated with one or more transformation processes for generating transformed keys;
link, concurrently with the assigning, according to the build operation by loading the dimensional identifier in parallel with the fact object.
Zhang discloses the build operation in accordance with one or more parallelization protocols configured to assign the dimensional identifier and link the fact object in parallel and one or more transform protocols configured to generate the dimensional identifier based on hashing the one or more natural keys of the dimensional object for building the database structure (Zhang, Col. 3, lines 9-20, see, on the basis of a multi-copy fault-tolerance mechanism of the Hadoop, a fact table is stored in a database cluster and a Hadoop cluster [i.e., where the tables must first be created/built from a build operation], a main working copy and at least one fault-tolerant copy of the fact table are set, the main working copy is imported into a local database of a working node, and a table corresponding to the main working copy is named according to a unified naming rule; Zhang, Col. 3, lines 37-41, see a parallel OLAP query processing technology is adopted for the main working copy of the local database; and a MapReduce query processing technology is adopted for the fault-tolerant copy in the Hadoop distributed file system; Zhang, Col. 1, lines 44-61, see to reduce network transmission cost of parallel join operation [i.e., “parallelization protocol”, e.g., tells how to perform the join in parallel], in some database systems, collaborative partitioning (hash or range partitioning) [i.e., the hash partitioning is being interpreted as the “one or more transform protocols”] of join key values of a fact table and dimension tables is adopted, so that corresponding primary-foreign key values in the fact table and the dimension tables joined thereto are stored in a distributed mode according to the same partition function, and therefore, tuples of the joins of the fact table and the dimension tables are allocated on the same node in advance, thereby reducing the network transmission cost during the join operation; and Zhang, Col. 8, lines 57-62, see importing records using a group-by aggregate hash table [i.e., keys were hashed and put into hash table] to implement real-time aggregate processing), and wherein the one or more transform protocols comprise one or more instructions for execution by a transform processor configured to execute a transformation model associated with one or more transformation processes for generating transformed keys (Zhang, Col. 1, lines 44-61, see collaborative partitioning (hash or range partitioning) [i.e., the hash partitioning is being interpreted as the “one or more transform protocols”] of join key values of a fact table and dimension tables is adopted, so that corresponding primary-foreign key values in the fact table and the dimension tables joined thereto are stored in a distributed mode according to the same partition function, and therefore, tuples of the joins of the fact table and the dimension tables are allocated on the same node in advance, thereby reducing the network transmission cost during the join operation; and Zhang, Col. 5, lines 20-27, see the parallel OLAP processing model).
Adjei, Wang and Zhang are analogous art because they are from the same problem-solving area, namely processing dimensional data.
It would have been obvious to one of ordinary skill in the art before the effective filing date, having the teachings of Adjei, Wang and Zhang before him/her, to modify the dimensional database building operation of the combination of Adjei and Wang to include the database building operation of Zhang because it would increase efficiency.
The suggestion/motivation for doing so would have been to ensure high query processing performance and high fault-tolerance performance, see Zhang, Col. 2, line 64-Col. 3, line 3.
Therefore, it would have been obvious to combine Zhang with the combination of Adjei and Wang to obtain the invention as specified in the instant claim(s).
The combination of Adjei, Wang and Zhang does not appear to explicitly disclose link, concurrently with the assigning, according to the build operation by loading the dimensional identifier in parallel with the fact object.
Malloy discloses link, concurrently with the assigning, according to the build operation by loading the dimensional identifier in parallel with the fact object (Malloy, [0009], see “… Dimensions are collections of related identifiers, or attributes (product, market, time, channel, scenario, or customer, for example) of the data values of the system.”; Malloy, [0021], see “… Metadata for a facts metadata object and one or more dimension metadata objects that are associated with the facts metadata object is stored.”; Malloy, [0065], see each metadata object completes a piece of the big picture showing what the relational data means. Some metadata objects act as a base to directly access relational data by aggregating data or directly corresponding to particular columns in relational tables. Other metadata objects describe relationships [i.e., “linking”] between the base metadata objects and link these base metadata objects together. Ultimately, all of the metadata objects can be grouped together by their relationships to each other, into a metadata object called a cube model. A cube model represents a particular grouping and configuration of relational tables. The purpose of a cube model is to describe OLAP structures to a given application or tool. Cube models tend to describe all cubes that different users might want for the data that are being analyzed. A cube model groups dimensions and facts, and offers the flexibility of multiple hierarchies for dimensions. 
A cube model conveys the structural information needed by query design tools and applications that generate complex queries on star schema databases; Malloy, Claim 1, see “…, comprising: storing metadata for a facts metadata object and one or more dimension metadata objects that are associated with the facts metadata object; …”; and Malloy, [0301], see “… Further, operations described herein may occur sequentially or certain operations may be processed in parallel, or operations described as performed by a single process may be performed by distributed processes.”).
Adjei, Wang, Zhang and Malloy are analogous art because they are from the same problem-solving area, namely processing dimensional data.
It would have been obvious to one of ordinary skill in the art before the effective filing date, having the teachings of Adjei, Wang, Zhang and Malloy before him/her, to modify the dimensional database building operation of the combination of Adjei, Wang and Zhang to include the loading of Malloy because it would return results very quickly.
The suggestion/motivation for doing so would have been to return multidimensional results sets naturally and compute some or all of the results in advance of a query, see Malloy, [0019] and [0020].
Therefore, it would have been obvious to combine Malloy with the combination of Adjei, Wang and Zhang to obtain the invention as specified in the instant claim(s).
Claims 10 and 19 recite similar limitations to claim 1 and are rejected under the same rationale.
With respect to claim 19, Adjei discloses a non-transitory computer readable medium including one or more instructions stored thereon (Adjei, [0013], see one or more computer-readable storage mediums).
Claims 3 and 12
With respect to claims 3 and 12, the combination of Adjei, Wang, Zhang and Malloy discloses wherein the dimensional object and the fact object correspond to respective table structures of the database structure (Adjei, [0088], see fact tables and dimension tables; and Adjei, [0093], see implementation using objects).
Claims 4 and 13
With respect to claims 4 and 13, the combination of Adjei, Wang, Zhang and Malloy discloses the system further to:
initiate, in parallel, the build of the database structure to generate the dimensional identifier, assign the dimensional identifier, and link the fact object (Adjei, [0057], see Natural Key Hash (Hash NK): The Hash NK is created to uniquely identify source records (i.e., natural key of the data). The Hash NK is also used to maintain the integrity of the data relationships based on the natural keys defined within the source system. Each reference or dimensional entity contains a binary hash of the natural key (Hash NK Source) that represents a unique instance of the entity in the data warehouse table. The Hash NK is also used to represent a relationship (one-to-many and many-to-many) between the entities in the data warehouse).
Claims 5, 14 and 20
With respect to claims 5, 14 and 20, the combination of Adjei, Wang, Zhang and Malloy discloses the system further to:
initiate, in parallel, the build of the database structure to generate the dimensional identifier and link the fact object in parallel with the assigning (Adjei, [0057], see Natural Key Hash (Hash NK): The Hash NK is created to uniquely identify source records (i.e., natural key of the data). The Hash NK is also used to maintain the integrity of the data relationships based on the natural keys defined within the source system. Each reference or dimensional entity contains a binary hash of the natural key (Hash NK Source) that represents a unique instance of the entity in the data warehouse table. The Hash NK is also used to represent a relationship (one-to-many and many-to-many) between the entities in the data warehouse).
Claims 6 and 15
With respect to claims 6 and 15, the combination of Adjei, Wang, Zhang and Malloy discloses wherein the dimensional identifier comprises a universally unique identifier (UUID) derived from one or more of the natural keys (Adjei, [0003], see a dimension modeled to capture Type 2 data changes typically consists of: surrogate key, a natural key, a row start date, a row end date, a most recent row indicator (current flag) and dimension attributes. For a given natural key, a change to these dimension attributes is detected and a new row is inserted into the dimension table with a new surrogate key. The row start date, row end date and most recent row indicator for the prior and new version of the rows are adjusted to reflect the new version of the record for the natural key; and Adjei, [0055], see in order to uniquely identify records and detect changes, hashing algorithms are used within the ETL processor 105 to create keys that are stored within the global data warehouse 114).
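By way of illustration only (not part of the claim mapping), a universally unique identifier derived from one or more natural keys, as recited in claims 6 and 15, can be sketched with a name-based (version 5) UUID; the namespace value and function name below are illustrative assumptions, since the references do not specify a particular derivation.

```python
import uuid

# Illustrative fixed namespace for the warehouse; any constant UUID would do.
WAREHOUSE_NS = uuid.uuid5(uuid.NAMESPACE_DNS, "example.warehouse")

def dimensional_uuid(*natural_keys: str) -> uuid.UUID:
    # Name-based (version 5) UUID: deterministically hashes the natural keys,
    # so the same keys always yield the same dimensional identifier.
    return uuid.uuid5(WAREHOUSE_NS, "|".join(natural_keys))
```

Determinism is the salient property: re-deriving the identifier from the natural keys on every load reproduces the same UUID without a lookup table.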
Claims 7 and 16
With respect to claims 7 and 16, the combination of Adjei, Wang, Zhang and Malloy discloses wherein the system associates the dimensional identifier with the fact object by embedding the dimensional identifier in the fact object (Adjei, [0057], see Natural Key Hash (Hash NK): The Hash NK is created to uniquely identify source records (i.e., natural key of the data). The Hash NK is also used to maintain the integrity of the data relationships based on the natural keys defined within the source system. Each reference or dimensional entity contains a binary hash of the natural key (Hash NK Source) that represents a unique instance of the entity in the data warehouse table. The Hash NK is also used to represent a relationship (one-to-many and many-to-many) between the entities in the data warehouse; and Adjei, [0093], see implementation using objects).
Claims 8 and 17
With respect to claims 8 and 17, the combination of Adjei, Wang, Zhang and Malloy discloses where the system embeds the dimensional identifier in a field of the fact object corresponding to a reference to the dimensional object (Adjei, [0057], see Natural Key Hash (Hash NK): The Hash NK is created to uniquely identify source records (i.e., natural key of the data). The Hash NK is also used to maintain the integrity of the data relationships based on the natural keys defined within the source system. Each reference or dimensional entity contains a binary hash of the natural key (Hash NK Source) that represents a unique instance of the entity in the data warehouse table. The Hash NK is also used to represent a relationship (one-to-many and many-to-many) between the entities in the data warehouse; and Adjei, [0093], see implementation using objects).
Claims 9 and 18
With respect to claims 9 and 18, the combination of Adjei, Wang, Zhang and Malloy discloses the system further to:
execute the build operation by referencing the dimensional identifier (Adjei, [0089], see after the data is stored in the data mart, a multi-dimensional cube 119 is used to allow a user to efficiently and effectively use the data. The multi-dimensional cube 119 is a pre-aggregated data structure that allows users to analyze measures by associated dimensions. The multidimensional cube allows users to efficiently and effectively, slice, dice, drill up and drill down on data without requiring users to have familiarity with the underlying data infrastructure. The cube manages different types of data relationships to vary levels of detail. The pre-aggregation of data and pre-defined relationships allow the cube to retrieve data quickly when queried; and Wang, page 2, 2nd full paragraph, see the mapping above regarding claim(s) 1, 10 and 19).
Claims 21 and 22
With respect to claims 21 and 22, the combination of Adjei, Wang, Zhang and Malloy discloses wherein the one or more parallelization protocols or one or more transform protocols comprise instructions to manage extract, transform, and load (ETL) operations of one or more steps of the build of the database in parallel (Adjei, [0061], see “FIG. 3 outlines the data flow for parent dimension processing. During this stage, in step 1, the pre-staging database 104, using an ETL [i.e., “E(xtract)T(ransform)L(oad)”] processor 103, extracts the source data from a source system 102 [i.e., “extract”]. In Step 2, the ETL processor 105 computes a Hash NK Source and Hash NK for each record extracted from the source system 102 using a hashing algorithm [i.e., “transform”]. In Step 3, the ETL processor 105 computes a Hash Full for each record extracted from the source system 102 using the hashing algorithm. In Step 4, the transformed data is loaded into a staging database 112 [i.e., “load”] and, in Step 5, for each parent dimension the Current Hashes table is populated which is used for loading the data into the data warehouse database 114, as will be discussed more fully below. Now each parent dimension record will be represented by a unique set of hash values.”; and Malloy, [0301], see operations described herein may occur sequentially or certain operations may be processed in parallel, or operations described as performed by a single process may be performed by distributed processes”).
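By way of illustration only (not part of the claim mapping), the extract, transform, and load flow quoted from Adjei, [0061], with the transform step performed in parallel per Malloy, [0301], can be sketched as follows; the stubbed source data, the thread-pool parallelism, and the use of SHA-256 for the hash computation are illustrative assumptions.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def extract():
    # Extract step: pull records from a (stubbed) source system.
    return [{"natural_key": f"CUST-{i:04d}", "name": f"Customer {i}"} for i in range(4)]

def transform(record):
    # Transform step: compute an identifying hash of the natural key (cf. Hash NK).
    record = dict(record)
    record["hash_nk"] = hashlib.sha256(record["natural_key"].encode()).hexdigest()
    return record

def load(records, staging):
    # Load step: append the transformed records to a (stubbed) staging table.
    staging.extend(records)

staging_table = []
with ThreadPoolExecutor(max_workers=2) as pool:
    transformed = list(pool.map(transform, extract()))  # transform records in parallel
load(transformed, staging_table)
```

Because each record's hash is computed independently, the transform step parallelizes trivially while the load step preserves the input order.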
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
– Jain et al., US 7,490,093, for generating a schema-specific load structure;
– Barnes et al., US 6,931,418, for partial order analysis of multi-dimensional data;
– Aggarwal et al., US 2009/0281985, for transforming and loading data into a fact table in a data warehouse; and
– Pu et al., CN 104063486, for big data distributed storage.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Point of Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUBERT G CHEUNG whose telephone number is (571) 270-1396. The examiner can normally be reached M-R 8:00A-5:00P EST; alt. F 8:00A-4:00P EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Neveen Abel-Jalil can be reached at (571) 270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
HUBERT G. CHEUNG
Assistant Examiner
Art Unit 2152
/Hubert Cheung/
Assistant Examiner, Art Unit 2152
Date: February 5, 2026
/NEVEEN ABEL JALIL/Supervisory Patent Examiner, Art Unit 2152