DETAILED ACTION
Claims 1–20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
In response to the Non-Final Office Action mailed 06 October 2025, the Applicant filed a response on 06 January 2026.
Response to Arguments
With regard to the rejection of the claims under 35 U.S.C. 101 as being directed to a judicial exception without significantly more, the Applicant argues that the claims do not fall within the enumerated categories of an abstract idea (Remarks: page 11), stating that the independent claims cannot merely be reduced to a mental process but are instead directed to concrete steps of performing modification by machine learning models, and further stating (Remarks: page 12, par. 1) that the Office ‘fails to provide any reasoning or rationale as to why the claims would fall into one of these enumerated sub-groupings let alone fall within the criteria for certain methods of organizing human activity.’ The Examiner, however, did provide an explanation as to why the claims are considered to recite a mental process. Taking claim 1 as an example, a human may mentally consider a document at different levels of granularity, such as the page, region, or token level, and may then mentally assign feature vectors to the document based on those granularity levels. The assignment of the feature vectors to the document could be based on predetermined relationships between the granularities and the particular values to be placed in each vector. A mental computation may be applied to the first feature vector, based on a predetermined set of self-attention values, to obtain a self-attention first feature vector. Predetermined cross-attention values may then be applied to the self-attention first feature vector to obtain a cross-attention first feature vector, and a classification task may then be performed mentally based on the obtained cross-attention first feature vector, the human being able to associate aspects of the cross-attention first feature vector with a classification category. The entire technique therefore appears to be one that can be performed in the human mind, or with the aid of pen and paper.
The recited first machine learning model is one able to classify a portion of a document based on the relationship between granularities of the plurality of granularities. This is a task that a human would be able to perform mentally. The second machine learning model performs a classification task by extracting relationships between granularities of the plurality of granularities of the document. A human is likewise able to mentally identify a relationship between certain granularities of a plurality of granularities present in a document. Independent claims 9 and 16 can be addressed similarly, and the Examiner accordingly maintains that these tasks can be performed mentally, contrary to the Applicant’s assertion.
Next, the Applicant asserts (Remarks: page 12) that the claims integrate the alleged abstract idea into a practical application, referring to [0022] of the Specification for specific improvements, including a ‘multi-modal multi-granular model… [that] generates data that can be used to perform multiple distinct tasks (e.g., entity recognition, document classification, etc.) at multiple granularities which reduces model storage cost and maintenance as well as improves performance over conventional systems as a result of the model obtaining information from regions at different granularities’ and that the models provide ‘a single model that … provides optimal results for a plurality of tasks thereby reducing training and maintenance costs required for the models to perform these tasks separately.’ Generating data to be used for performing multiple distinct tasks, such as entity recognition and document classification, is itself a mental process, as it can be performed by mentally assessing the document to determine particular factors that would be useful for recognising entities or for classifying documents. Further, having a single model that is able to perform a plurality of tasks is itself something a human is capable of, a single human being able to perform several tasks such as recognising entities or classifying documents. The Examiner maintains that the asserted practical applications are themselves abstract mental processes.
The Applicant has introduced further language reciting that the classification task comprises the extraction of relationships between granularities of the plurality of granularities of the document, but this does not actually provide detail regarding the way the classification is performed, nor what the classification actually entails. It simply indicates that the classification involves the extraction of relationships between sections of a document, which is itself also a mental process.
The Examiner hereby maintains the 35 U.S.C. 101 rejection based on the claims being directed to a judicial exception without significantly more.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.
Independent claim 1 recites the limitations of obtaining first and second feature vectors from a document for a plurality of granularities of the document based on a page, region, or token level; applying a machine learning model, trained to classify at least a portion of the document, to modify the first feature vector to generate a self-attention first feature vector of a set of modified features using a set of self-attention values to determine a relationship within a first type of feature; applying the machine learning model to also modify the self-attention first feature vector to generate a cross-attention first feature vector using a set of cross-attention values; and then providing the cross-attention first feature vector to a second machine learning model to perform a classification task, the classification task comprising extracting relationships between granularities of the plurality of granularities of the document.
Nothing in the claim precludes the claimed technique from being performed in the human mind. The entire process involves data gathering, data manipulation, and data presentation. First and second feature vectors for a document are obtained at different levels, which can be performed by having a human assign particular values to sections of the document in order to represent it, possibly through the mental application of a dedicated algorithm to assign the vector values. The human may then apply a mental computation to the first feature vector to generate a self-attention first feature vector based on a set of self-attention values. The human may further modify the self-attention first feature vector to generate a cross-attention first feature vector using available predetermined cross-attention values, the modification being a mental calculation that manipulates the self-attention first feature vector, using the cross-attention values, to produce the cross-attention first feature vector. Finally, the cross-attention first feature vector is made available so that a user may apply the values contained in this vector to perform a classification task by associating aspects of the cross-attention first feature vector with a classification category, the classification task comprising the extraction of relationships between granularities of the plurality of granularities of the document, shown as the human presenting the relationship between granularities of the document. The recited machine learning models serve merely as tools for performing the required computations. The claim therefore recites a mental process.
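By way of illustration of the Examiner’s characterization of these steps as mathematical computations, the self-attention and cross-attention modifications described above can be sketched as scaled dot-product attention over toy feature vectors. The values, dimensions, and function names below are illustrative assumptions only and are not drawn from the Applicant’s disclosure:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product attention: each query attends over all keys,
    # and each output row is the attention-weighted sum of the values.
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Toy "textual" (first) and "visual" (second) feature vectors (assumed values).
text_feats = [[1.0, 0.0], [0.0, 1.0]]
vis_feats = [[0.5, 0.5], [1.0, 1.0]]

# Self-attention: the first feature vector attends over itself.
self_att_text = attention(text_feats, text_feats, text_feats)

# Cross-attention: the self-attended first features attend over the second.
cross_att_text = attention(self_att_text, vis_feats, vis_feats)
```

Each output row is an attention-weighted average of the value vectors, i.e., a fixed sequence of multiplications and additions of the kind that could, in principle, be carried out with pen and paper.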
This judicial exception is not integrated into a practical application, as the claim simply teaches collecting data, analysing it to obtain more data, transforming data, and then presenting data to a next stage for classification, without specific details on how the classification is performed. Obtaining the set of feature vectors is a data gathering step; the feature vector is then modified using a set of self-attention values to determine relationships within a first type of feature, and a set of cross-attention values to determine relationships between the first type of feature and a second type of feature, all of which is data manipulation. Finally, the set of modified data is provided in order to perform a classification task that involves the extraction of a relationship between granularities of the plurality of granularities of the document, which is data presentation and further data manipulation.
The invention is not tied to any particular defining structure and simply provides instructions to apply the judicial exception. The techniques can be performed by a generic computer, which would serve as a tool to implement the abstract idea (classifiable as automation of the mathematical concept). The Specification in [0034] provides the use of one or more computer processors, which would be necessary to enact these steps. This is presented in generic terms, such that a general-purpose computer could be used to address the claim limitations. The Specification in [0046] provides that an OCR, CNN, or another machine learning model may be used for generating feature vectors, and according to [0002], the features obtained may ‘be provided to other machine learning models to perform various tasks.’ The machine learning model here modifies available feature vectors, which is a mathematical concept. A classifier, which can also be another machine learning model (Specification: [0030]), applies the obtained features to perform a classification task; classifying items as one thing or another based on certain received features is a task which a human can mentally perform, and which can also be achieved by a mathematical computation with classification labels assigned based on computed values. Mentioning a machine learning model in the claim can simply refer to the application of a general-purpose computer to performing the mathematical algorithm. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the invention is not tied to a practical application.
The claim provides techniques that amount to no more than mere instructions to apply the judicial exception, which can be performed by a generic device. While the claim mentions performing a classification task, no specificity is given regarding the type of classification performed. The claim also mentions machine learning models, but fails to recite specifics on how the models operate, and therefore still does not amount to significantly more than the mentioned judicial exception. Mere instructions to apply an exception using a generic device cannot provide an inventive concept. Claim 1 is not eligible.
Claim 2 provides that the first feature is textual and the second feature is visual. A human may observe both textual and visual features. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Claim 3 provides that a first subset of self-attention values of the set of self-attention values are determined by calculating self-attention for the textual features. This claim provides purely mathematical calculations and does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Claim 4 provides a first subset of cross-attention values of the set of cross-attention values being determined by calculating cross-attention between textual and visual features. This claim provides purely mathematical calculations and does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Claim 5 provides that the set of self-attention values comprise an alignment bias indicating a relationship between tokens and regions of the document. Calculating an alignment bias of a relationship between tokens and regions of the document indicates a mathematical concept. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Claim 6 provides that the set of features comprises a fixed dimension vector including feature information, spatial information, position information, type information, or a combination. Sets of features that indicate the mentioned information are classified under mathematical concepts. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Claim 7 provides the plurality of granularities of the document to include a page level, region level and a token level granularity. These refer to a purely mental process of selecting a level of granularity. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Claim 8 provides that the set of features comprises a fixed dimension vector. The selection of a fixed dimension for a vector can be a purely mental process. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Independent claim 9 recites the limitations of obtaining first and second feature vectors from a document for a plurality of granularities of the document based on a page, region, or token level; applying a machine learning model, trained based on relationships between granularities of a plurality of granularities, to modify the first feature vector to generate a self-attention first feature vector of self-attention weights according to the first feature vector, as well as to modify the second feature vector to generate a self-attention second feature vector of self-attention weights according to the second feature vector, both from the plurality of granularities; applying the machine learning model to modify the self-attention first feature vector to generate a cross-attention first feature vector with a first set of cross-attention weights based on the self-attention second feature vector, as well as to modify the self-attention second feature vector to generate a cross-attention second feature vector with a second set of cross-attention weights based on the self-attention first feature vector; and then providing a portion of the cross-attention first feature vector or the cross-attention second feature vector to a classifier to perform a task, the classification comprising the extraction of features from the plurality of granularities so as to determine relationships between the granularities of the plurality of granularities.
Nothing in the claim precludes the claimed technique from being performed in the human mind. The entire process involves data gathering, data manipulation, and data presentation, performed as a mental task. First and second feature vectors for a document are obtained at different levels, which can be performed by having a human assign particular values to sections of a document in order to represent the document, possibly through the mental application of a dedicated algorithm to assign the vector values. The human may then apply a mental computation to both feature vectors in order to generate self-attention first and second feature vectors based on first and second sets of self-attention weights. The human may then modify the self-attention first feature vector to generate a cross-attention first feature vector, and modify the self-attention second feature vector to generate a cross-attention second feature vector, these modifications being performed mentally through mathematical computations. Finally, the human applies the values of either the cross-attention first feature vector or the cross-attention second feature vector to performing a classification task that comprises the extraction of features from the plurality of granularities of the document, and determining relationships between the granularities of the plurality of granularities. The recited machine learning models serve merely as tools for performing the required computations. The claim therefore recites a mental process.
This judicial exception is not integrated into a practical application, as the claim simply teaches collecting data, analysing it to obtain more data, transforming data, and then presenting data to a next stage for classification, without any specifics regarding the classification. Obtaining the set of feature vectors is a data gathering step; the set of features is then modified using a set of self-attention values to determine relationships within a first type of feature, and a set of cross-attention values to determine relationships between the first type of feature and a second type of feature, all of which is data manipulation. Finally, the set of modified data is provided in order to perform a classification task, which is data presentation and further data manipulation.
The invention is not tied to any particular defining structure and simply provides instructions to apply the judicial exception. The techniques can be performed by a generic computer, which would serve as a tool to implement the abstract idea (classifiable as automation of the mathematical concept). The Specification in [0034] provides the use of one or more computer processors, which would be necessary to enact these steps. This is presented in generic terms, such that a general-purpose computer could be used to address the claim limitations. The Specification in [0046] provides that an OCR, CNN, or another machine learning model may be used for generating feature vectors, and according to [0002], the features obtained may ‘be provided to other machine learning models to perform various tasks.’ The machine learning model here modifies available feature vectors, which is a mathematical concept. A classifier, which can also be another machine learning model (Specification: [0030]), applies the obtained features to perform a classification task; classifying items as one thing or another based on certain received features is a task which a human can mentally perform, and which can also be achieved by a mathematical computation with classification labels assigned based on computed values. Mentioning a machine learning model in the claim can simply refer to the application of a general-purpose computer to performing the mathematical algorithm. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the invention is not tied to a practical application.
The claim provides techniques that amount to no more than mere instructions to apply the judicial exception, which can be performed by a generic device. While the claim mentions performing a classification task, no specificity is given regarding the type of classification performed. The claim also mentions machine learning models, but fails to recite specifics on how the models operate, and therefore still does not amount to significantly more than the mentioned judicial exception. Mere instructions to apply an exception using a generic device cannot provide an inventive concept. Claim 9 is not eligible.
Claim 10 provides teaching for causing a CNN to generate the first feature vector based on a set of bounding boxes within a region of the document. A convolutional neural network being used to generate a feature vector based on certain spatial limits indicates a mathematical concept. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Claim 11 provides encoding the first feature vector with the first set of self-attention weights by adding an alignment bias and a relative distance bias. The encoding of feature vectors with certain weights is a mental process that can be performed by a human. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Claim 12 provides an alignment bias comprising a matrix for a relationship between a token in the document and a region of the document. This process involves a mathematical concept for a relationship between representations of token and region of a document. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Claim 13 provides that the relationship includes at least one of inside, above, below, right of and left of. This relationship is a purely mental observation that can be made by a human. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
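By way of illustration of the observational character of such a relationship, the recited spatial relations can be sketched as a simple bounding-box comparison. The function name, the box format, the coordinate convention (y increasing downward), and the fallback value are assumptions for illustration only, not drawn from the Applicant’s disclosure:

```python
def spatial_relation(token_box, region_box):
    # Boxes as (x0, y0, x1, y1); y increases downward, as in page coordinates.
    tx0, ty0, tx1, ty1 = token_box
    rx0, ry0, rx1, ry1 = region_box
    # "Inside": the token box is fully contained in the region box.
    if tx0 >= rx0 and ty0 >= ry0 and tx1 <= rx1 and ty1 <= ry1:
        return "inside"
    # Otherwise compare the boxes' extents along each axis.
    if ty1 <= ry0:
        return "above"
    if ty0 >= ry1:
        return "below"
    if tx0 >= rx1:
        return "right of"
    if tx1 <= rx0:
        return "left of"
    return "overlapping"
```

Each branch is a direct coordinate comparison, i.e., the same judgment a human makes when observing where a word sits relative to a region of a page.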
Claim 14 provides that the relative distance bias includes a matrix of distance values calculated based on bounding boxes associated with regions of the document. This involves a mathematical concept, the relative distance bias being a matrix of distance values computed from bounding box coordinates. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
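As an illustration of the mathematical character of such a matrix, a relative distance bias over region bounding boxes can be sketched as pairwise distances between box centers. The box format, function names, and values are illustrative assumptions only, not drawn from the Applicant’s disclosure:

```python
import math

def center(box):
    # Box given as (x0, y0, x1, y1); returns its center point.
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def distance_bias(boxes):
    # Matrix of pairwise Euclidean distances between region centers.
    cs = [center(b) for b in boxes]
    return [[math.hypot(ax - bx, ay - by) for (bx, by) in cs]
            for (ax, ay) in cs]

# Two toy regions with centers at (5, 5) and (25, 5).
boxes = [(0, 0, 10, 10), (20, 0, 30, 10)]
bias = distance_bias(boxes)
# bias[0][1] == bias[1][0] == 20.0
```

The entries are ordinary arithmetic on coordinates, consistent with the characterization of the limitation as a mathematical concept.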
Claim 15 provides that the task comprises at least one of document, region, entity and token recognition. These can be performed by a human as a mental task. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Independent claim 16 recites the limitations of obtaining a training data set that has a set of documents and a set of features extracted from the documents; training a multi-modal multi-granular model to generate feature vectors that contain information obtained from a plurality of document regions, and relationships between features from distinct regions of the document regions, the features having a first type and a second type such that the relationship includes an alignment relation between two or more regions of the document; whereby the model is caused to generate an output by generating a first set of features of a first document for a plurality of granularities; applying a first machine learning model to modify the first set of features of the first document to generate a set of modified self-attention features using a set of self-attention values to determine relationships within the first type of feature; again applying the first machine learning model to modify the first set of features of the first document to generate a set of modified cross-attention features with a first set of cross-attention weights based on a second set of modified self-attention features of the first document; and then providing the set of modified features to a second machine learning model to perform a classification task, the classification task including the determination of a relationship between a first region and a second region of the plurality of regions of the document.
Apart from the processors and memory that are comprised in the system, nothing in the claim precludes the claimed technique from being performed in the human mind. The entire process involves data gathering, data manipulation, and data presentation, including performance through mathematical manipulation. A set of documents could be collected and a mathematical algorithm applied to extract features from the documents. The training of the multi-modal multi-granular model can be taken as training a human to perform classification through the generation of feature vectors including information obtained from a plurality of document regions and from relationships between features from distinct document regions, this being performed by having the human assign particular values to sections of the document in order to represent those sections, with the training involving the application of mathematical algorithms. The features are of a first type and a second type, and the assignment of these feature types involves a mental process. The training of the model can likewise be taken as the process of ensuring that a human is able to understand the concepts of the alignment relationships between regions of the available documents. A human trained in this multi-modal multi-granular model may then apply that knowledge to generate an output by generating features of a first document for a plurality of granularities of the first document, modifying the obtained document features to obtain a new set of modified features by addressing values within one type of document feature and also addressing values across the two types of features, and then presenting the set of modified features to a second human, who may be better able to perform a classification task making use of the set of modified features, the classification involving the determination of a relationship between a first region of the document and a second region of the document.
The claim hereby recites a mental process.
This judicial exception is not integrated into a practical application, as the claim simply teaches collecting data, analysing it to obtain more data, transforming data, and then presenting data to a next stage for classification, without any specifics regarding the classification. While the claim does mention a processor and a memory, these are presented in generic terms. Obtaining the set of features is a data gathering step; the set of features is then modified using a set of self-attention values to determine relationships within a first type of feature, and a set of cross-attention values to determine relationships between the first type of feature and a second type of feature, all of which is data manipulation. Finally, the set of modified data is provided in order to perform a classification task, which is data presentation and further data manipulation.
The invention is not tied to any particular defining structure and simply provides instructions to apply the judicial exception. The techniques can be performed by a generic computer, which would serve as a tool to implement the abstract idea (classifiable as automation of the mental process steps). The Specification in [0034] provides the use of one or more computer processors. This is presented in generic terms, such that a general-purpose computer could be used to address the claim limitations. The Specification in [0046] provides that an OCR, CNN, or another machine learning model may be used for generating feature vectors, and according to [0002], the features obtained may ‘be provided to other machine learning models to perform various tasks.’ Textual and visual features are obtained from a document, which a human can do by observing the constituents of the document. A machine learning model may also be applied to perform a classification task; classifying items as one thing or another based on certain received features is a task which a human can mentally perform. The presence of a machine learning model in the claim can simply refer to the application of a general-purpose computer to perform a mental process. The classification task here can be performed mentally, as the claim simply applies generic machine learning models to perform mental tasks. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the invention is not tied to a practical application.
The claim provides techniques that amount to no more than mere instructions to apply the judicial exception, which can be performed by a generic device. Merely mentioning storage media and a processor amounts to no more than general-purpose hardware used as a tool to implement the abstract idea, and does not provide any particular application other than implementing a judicial exception. While the claim mentions performing a classification task, no specificity is given regarding the type of classification performed. Mere instructions to apply an exception using a generic device cannot provide an inventive concept. Claim 16 is not eligible.
Claim 17 provides performing a pretraining operation on the multi-modal multi-granular model by causing the model to perform a self-supervision task including an alignment loss function to reinforce alignment information that is generated by the model. This pretraining of the model to perform a self-supervision task using an alignment loss function involves a mathematical concept. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Claim 18 provides that the alignment loss function comprises calculating the binary cross entropy loss between alignment information generated by the model and an alignment model. The use of an alignment loss function for calculating the binary cross entropy loss between the alignment information generated by the model and the alignment model involves a mathematical concept. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
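For illustration of the mathematical concept involved, the standard binary cross entropy formulation can be sketched as follows; the function name and toy values are assumptions for illustration only, not drawn from the Applicant’s disclosure:

```python
import math

def bce_loss(predicted, target, eps=1e-7):
    # Mean binary cross entropy between predicted alignment probabilities
    # and 0/1 target alignment labels.
    total = 0.0
    for p, t in zip(predicted, target):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical safety
        total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(predicted)

# Near-perfect predictions give a near-zero loss; wrong ones a large loss.
low = bce_loss([0.99, 0.01], [1.0, 0.0])
high = bce_loss([0.01, 0.99], [1.0, 0.0])
```

The loss is a closed-form arithmetic expression over probabilities and labels, consistent with the characterization of the limitation as a mathematical concept.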
Claim 19 provides that the first type of feature comprises semantic features and the second type comprises visual features. Establishing semantic and visual features as the features to be extracted from a document is a mental process that can be established by a human. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Claim 20 provides that the generated feature vectors are used to perform at least one of document classification, region re-classification and entity recognition. These tasks can be performed mentally by a human: the computing of the feature vectors is a mathematical concept, and linking the computed vectors to a particular classification involves a mental assignment that can be made by a human. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Potentially Allowable Subject Matter
Claims 1–20 would potentially be allowable if rewritten or amended to overcome the rejection under 35 U.S.C. 101, set forth in this Office action.
The following is a statement of reasons for the indication of allowable subject matter:
The same prior art applied to the claims in the Non-Final Office Action mailed 05 December 2024 remains applicable to the respective claims.
With regard to independent claim 1, the prior art of record, taken alone or in combination, fails to teach, inter alia, a machine learning model applied to modify a first document feature vector to generate a self-attention first feature vector, the machine learning model being used to modify the self-attention first feature vector to generate a cross-attention first feature vector, and the cross-attention first feature vector being provided to a second machine learning model in order to perform a classification task.
Claim 1 would be allowable over the prior art of record if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 101, set forth in this Office action.
Claims 2, 3, 4, 5, 6, 7 and 8 would be allowable if rewritten to overcome the rejection under 35 U.S.C. 101, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
With regard to independent claim 9, the prior art of record, taken alone or in combination, fails to teach, inter alia, a machine learning model applied to modifying a first document feature vector to generate a self-attention first feature vector, modifying a second document feature vector to generate a self-attention second feature vector, using the machine learning model to modify the self-attention first feature vector and the self-attention second feature vector to generate a cross-attention first feature vector and a cross-attention second feature vector, respectively, and providing at least a portion of the cross-attention first feature vector or the cross-attention second feature vector to a classifier in order to perform a task.
Claim 9 would be allowable over the prior art of record if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 101, set forth in this Office action.
Claims 10, 11, 12, 13, 14 and 15 would be allowable if rewritten to overcome the rejection under 35 U.S.C. 101, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
With regard to independent claim 16, the prior art of record, taken alone or in combination, fails to teach, inter alia, a first machine learning model applied to modifying a first set of features to generate a set of modified self-attention first features using a set of self-attention values, the same first machine learning model being applied to modify the first set of features of the first document to generate a set of modified cross-attention features with a first set of cross-attention weights based on a second set of modified self-attention features of the first document, and providing the set of modified features to a second machine learning model in order to perform a classification task.
Claim 16 would be allowable over the prior art of record if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 101, set forth in this Office action.
Claims 17, 18, 19 and 20 would be allowable if rewritten to overcome the rejection under 35 U.S.C. 101, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the Examiner should be directed to OLUWADAMILOLA M. OGUNBIYI whose telephone number is (571)272-4708. The Examiner can normally be reached Monday – Thursday (8:00 AM – 5:30 PM Eastern Standard Time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s Supervisor, PARAS D. SHAH can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OLUWADAMILOLA M OGUNBIYI/Examiner, Art Unit 2653
/Paras D Shah/Supervisory Patent Examiner, Art Unit 2653
04/06/2026