Prosecution Insights
Last updated: April 19, 2026
Application No. 18/084,267

DATA PROCESSING METHOD AND APPARATUS

Status: Non-Final OA (§101, §103)
Filed: Dec 19, 2022
Examiner: SPRATT, BEAU D
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (342 granted / 432 resolved; +24.2% vs TC avg) — above average
Interview Lift: +26.6% higher allow rate among resolved cases with an interview (strong)
Typical Timeline: 3y 1m average prosecution; 37 applications currently pending
Career History: 469 total applications across all art units

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 432 resolved cases.

Office Action

Rejections: §101, §103

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-18 are presented in the case.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on application CN202010596738.4, filed in China on 06/28/2020. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements submitted on 10/08/2023 and 02/21/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: "Data Processing Using Hyperbolic Feature Extraction and Classification Networks".

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 4-6, 12, and 15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 ("2019 PEG"). Claims 4-6, 12, and 15 are analyzed as follows.

Step 1: The claims are directed to "a method and apparatus" and accordingly fall within the statutory categories.

Step 2A, Prong 1: The claims recite the abstract-idea limitations "calculate a geometric center of the embedding vector", "convert the embedding vector obtained by the first processing layer into a vector expressed", "calculating the gradient", and "converting the gradient to a gradient expressed in the hyperbolic space". These limitations are mathematical concepts (see MPEP § 2106.04(a)(2), which cites the phrase "calculating the force of the object by multiplying its mass by its acceleration" as a textual replacement for a particular equation, and "a conversion between binary coded decimal and pure binary"). The specification likewise provides an example midpoint calculation and conversion formula (see USPGPUB ¶210 and ¶215). Thus, these steps fall within the "mathematical concepts" grouping of abstract ideas. Other portions of the claims, such as "obtaining to-be-processed data", "processing the to-be-processed data", "outputting the processing result", "extracting", "classifying", "the neural network comprises a feature extraction network and a classification network", "a conformal conversion layer", and "updating the neural network", are too generic or high level to be treated as judicial exceptions given the available descriptions and MPEP comparisons.

Step 2A, Prong 2: The judicial exceptions recited in these claims are not integrated into a practical application. Merely invoking "a trained neural network", "to-be-processed data", "a processor", or "memory" does not yield eligibility. Claims 4-6, 12, and 15 remain in line with mental concepts and are not specific to a practical application. The additional elements, such as processors and instructions, do not include specialized hardware. See MPEP § 2106.05(f).
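For orientation, the limitations the rejection groups as mathematical concepts correspond to standard operations on the Poincaré ball model of hyperbolic space, the model the cited Sikka paragraphs use. Below is a minimal Python sketch of the origin exponential/logarithmic maps (the Euclidean-to-hyperbolic "conversion") and the usual Riemannian gradient rescaling, as they appear in the hyperbolic-embedding literature; this is an illustration only, not the application's own formulas (those are in USPGPUB ¶210 and ¶215):

import numpy as np

def exp0(v, eps=1e-9):
    # Exponential map at the origin: tangent (Euclidean) vector -> Poincare ball.
    n = np.linalg.norm(v) + eps
    return np.tanh(n) * v / n

def log0(x, eps=1e-9):
    # Logarithmic map at the origin: Poincare-ball point -> tangent (Euclidean) vector.
    n = np.linalg.norm(x) + eps
    return np.arctanh(n) * x / n

def riemannian_grad(x, euclidean_grad):
    # Convert a Euclidean gradient at x into the Poincare-ball Riemannian gradient
    # by rescaling with the inverse conformal metric factor ((1 - ||x||^2)^2 / 4).
    return ((1.0 - np.dot(x, x)) ** 2 / 4.0) * euclidean_grad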
Claims 4-6, 12, and 15 do not recite a particular field of use, and even doing so may not be sufficient to overcome the abstract-idea rejection: merely applying a model to a field or to data, without an advancement in that field or new hardware, is ineligible. See MPEP § 2106.05(h).

Step 2B: The claims do not contain significantly more than their judicial exceptions. The recited processors, memory, and other hardware are in their standard forms in the field; these additional elements are well-understood, routine, and conventional activity. See MPEP § 2106.05(d)(II). The claims lack any particular "how" or algorithm that solves a problem in the field in a novel way. The claims would need to recite processes with enough specificity that they could not be performed by simple mathematics or mental processes, or recite more substantial structure than conventional devices, such as non-textbook implementations.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4, 7-13, and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Sikka et al. (US 20190325342 A1, hereinafter Sikka) in view of Gao et al. (US 20170032035 A1, hereinafter Gao).

As to independent claim 1, Sikka teaches a data processing method [processing data via embedding, ¶5-6] comprising: obtaining to-be-processed data [text and image data, ¶36: "if the multimodal content is an image with a caption, the first modality feature vector may represent a first modality (text) of the multimodal content (caption portion)"]; processing the to-be-processed data using a trained neural network to obtain a processing result; and outputting the processing result [training for results, Fig. 7, ¶41: "For training, a database with images having semantic tags may be used. Keywords from image captions may then be extracted and used as labels for training the images. A ranking loss algorithm can be adjusted to push similar images and tags (words) together and vice-versa. The mean average precision (MAP) may be output for evaluating the training results."], wherein the feature extraction network is configured to extract a feature vector expressed by the to-be-processed data in hyperbolic space [embeds (extracts) the vector into a non-Euclidean (hyperbolic, ¶7) space, ¶37: "In block 310, the first modality feature vector of the multimodal content and the second modality feature vector of the multimodal content are semantically embedded in a non-Euclidean geometric space"], and the classification network is configured to process the feature vector based on an operation rule of the hyperbolic space, to obtain the processing result [the hyperbolic network categorizes (classifies) input image features into a result (e.g., "land animals"), ¶33: "using a non-Euclidean space such as, for example, a Poincaré space allows distinct classes to form in broader categories such as 'plants' and 'land animals'"; uses warping (rule), ¶34: "hyperbolic embeddings provide a way to capture distances that grow exponentially through a logarithm-like warping of distance space."].

Sikka does not specifically teach that the neural network comprises a feature extraction network and a classification network. However, Gao teaches a neural network comprising a feature extraction network and a classification network [separate sections for classification (DNN) and feature extraction (Fig. 6, 602), ¶19-20: "a deep structured semantic model (DSSM) may be used to project an input item to an output item in a semantic space. For example, the input item may correspond to an input vector that represents one or more words, while the output item may correspond to a concept vector that expresses semantic information regarding the word(s)" ... "a DSSM comprises a pair of DNNs, where one DNN may be used for mapping the source (e.g., text) into a semantic vector"]. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the models disclosed by Sikka by incorporating a neural network comprising a feature extraction network and a classification network as disclosed by Gao, because both techniques address the same field of machine learning, and incorporating Gao into Sikka provides more relevant results with faster convergence [Gao ¶78].

As to dependent claim 2, the rejection of claim 1 is incorporated. Sikka and Gao further teach wherein the to-be-processed data comprises at least one of the following: natural language data, knowledge graph data, gene data, or image data [Sikka, image and caption (natural language), ¶36].

As to dependent claim 4, the rejection of claim 1 is incorporated. Sikka and Gao further teach wherein the feature extraction network comprises a first processing layer and a second processing layer [Sikka, embedding, linear, and projection layers, ¶40]; the first processing layer is configured to process the to-be-processed data, to obtain an embedding vector represented by the to-be-processed data in the hyperbolic space [Sikka embeds the vector into a non-Euclidean (hyperbolic, ¶7) space, ¶37, quoted above]; and the second processing layer is configured to calculate a geometric center of the embedding vector in the hyperbolic space, to obtain the feature vector [Sikka performs clustering to find the centers of the vectors, ¶53: "allows learning of a clustering of users in addition to the user embeddings inside the same network. Learning the clusters jointly allows for a better and automatic sharing of information about similar images between user embeddings than what is available explicitly in the dataset. To this end, an additional matrix w_C ∈ ℝ^(K×D) is maintained, where C is the number of clusters, a hyperparameter. Each row of w_C, represented by c_l, l = 1, 2, . . . , K, is the vector representing the cluster center of the l-th cluster"].

As to dependent claim 7, the rejection of claim 1 is incorporated. Sikka and Gao further teach wherein the classification network is configured to: process the feature vector based on the operation rule of the hyperbolic space to obtain a to-be-normalized vector expressed in the hyperbolic space [Sikka, vectors ready for normalizing and categories for classification, ¶56-57, ¶40: "Hierarchies are determined by the normalization of the embedded vectors. As illustrated in a view 400A of FIG. 4A, Euclidean space does not inherently preserve hierarchies as the content is spread across a single plane"]; and map the to-be-normalized vector to the Euclidean space, and perform normalization processing on the to-be-normalized vector mapped to the Euclidean space, to obtain the processing result [Sikka, normalizing, with Fig. 4A (view 400A) illustrating the mapping to Euclidean space, ¶56-57, ¶40].

As to independent claim 8, Sikka teaches a data processing method [processing data via embedding, ¶5-6] comprising: obtaining training data and a corresponding category label [images and tags/labels, ¶41: "For training, a database with images having semantic tags may be used. Keywords from image captions may then be extracted and used as labels for training the images."]; processing the training data by using a neural network, to obtain a processing result [training for results, Fig. 7, ¶41, quoted above], wherein the feature extraction network is configured to extract a feature vector of the training data [the embeddings are part of the network that extracts the vector into a non-Euclidean (hyperbolic, ¶7) space, ¶37, quoted above], and the classification network is configured to process the feature vector based on an operation rule of hyperbolic space, to obtain the processing result [part of the network categorizes (classifies) input image features into a result (e.g., "land animals"), ¶33; uses warping (rule), ¶34, quoted above]; obtaining a loss based on the category label and the processing result [loss and tags, ¶41: "A ranking loss algorithm can be adjusted to push similar images and tags (words) together and vice-versa."]; obtaining, based on the loss, a gradient expressed in the hyperbolic space [loss with gradient descent in a Riemannian (hyperbolic) space, ¶40-41: "A contrastive loss function with Riemannian SGD (Stochastic Gradient Descent) was used for the embedding"]; and updating the neural network based on the gradient to obtain an updated neural network [continued training (updating) for results, Fig. 7, ¶41, quoted above].

Sikka does not specifically teach that the neural network comprises a feature extraction network and a classification network. However, Gao teaches a neural network comprising a feature extraction network and a classification network [separate sections for classification (DNN) and feature extraction (Fig. 6, 602), ¶19-20, quoted above]. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the models disclosed by Sikka by incorporating a neural network comprising a feature extraction network and a classification network as disclosed by Gao, because both techniques address the same field of machine learning, and incorporating Gao into Sikka provides more relevant results with faster convergence [Gao ¶78].

As to dependent claim 9, the rejection of claim 8 is incorporated. Sikka and Gao further teach wherein the updating the neural network based on the gradient to obtain an updated neural network comprises: updating the feature extraction network in the neural network based on the gradient, to obtain an updated feature extraction network, wherein the updated feature extraction network is configured to extract the feature vector expressed by the training data in the hyperbolic space [Sikka, hyperbolic embedding over iterations (updating), Fig. 7 "Iterations" and SGD, ¶40-41: "Poincaré ball is a realization of hyperbolic space (open d dimensional unit ball) and, in an embodiment, the Poincaré ball can be used to model the hyperbolic embedding space"].

As to dependent claim 10, the rejection of claim 8 is incorporated. Sikka and Gao further teach wherein the classification network is configured to: process the feature vector based on the operation rule of the hyperbolic space to obtain a to-be-normalized vector expressed in the hyperbolic space; and map the to-be-normalized vector to the Euclidean space, and perform normalization processing on the to-be-normalized vector mapped to the Euclidean space, to obtain the processing result [Sikka, ¶56-57, ¶40, as cited for claim 7].

As to dependent claim 11, the rejection of claim 10 is incorporated. Sikka and Gao further teach wherein the obtaining a loss based on the category label and the processing result comprises: obtaining the loss based on the category label, the processing result, and a target loss function, wherein the target loss function is a function expressed in the Euclidean space [Sikka, loss and tags with a standard loss, ¶41, ¶45: "A ranking loss algorithm can be adjusted to push similar images and tags (words) together and vice-versa."].

As to dependent claim 12, the rejection of claim 10 is incorporated. Sikka and Gao further teach calculating the gradient corresponding to the loss, wherein the gradient is expressed in the Euclidean space [Gao, cross-entropy SGD loss, ¶76-77; Sikka, ¶36: "word2vec (Euclidean space) may be used to provide the first feature vector for text modalities. However, the inventors have found that performance may be increased by retraining the word2vec with vectors from a non-Euclidean space."]; converting the gradient to a gradient expressed in the hyperbolic space [Sikka alters the gradient for the manifold (hyperbolic) from word2vec (Euclidean), ¶36, ¶41: "The structure of loss function and the gradient descent are altered to create a linear projection layer to constrain embedding vectors to the manifold. In one example, a pre-trained word2vec model may be used and the results can be projected to the manifold via a few non-linear layers."]; and updating the neural network based on the gradient expressed in the hyperbolic space [Sikka, continued training for results, Fig. 7, ¶41, quoted above].

As to independent claim 13, Sikka teaches a data processing apparatus, wherein the apparatus comprises a memory and a processor, the memory stores code, and the processor is configured to execute the code [apparatus, processor, and memory, ¶8] to perform the method steps mapped for claim 1 above: obtaining to-be-processed data [Sikka ¶36]; processing the to-be-processed data using a trained neural network to obtain a processing result; and outputting the processing result [Sikka Fig. 7, ¶41], wherein the feature extraction network extracts a feature vector expressed by the to-be-processed data in hyperbolic space [Sikka ¶7, ¶37] and the classification network processes the feature vector based on an operation rule of the hyperbolic space, to obtain the processing result [Sikka ¶33-34]. Sikka does not specifically teach that the neural network comprises a feature extraction network and a classification network; however, Gao teaches this [Fig. 6, 602, ¶19-20], and the combination rationale given for claim 1 applies equally here [Gao ¶78].
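An aside on claims 4 and 15: the "geometric center" limitation is commonly realized in the hyperbolic-embedding literature as the Einstein midpoint, computed by passing through the Klein model — the same conformal/projective model pair the Li reference (applied against claim 6 below) describes. A minimal Python sketch, assuming Poincaré-ball inputs; the formulas are the standard published ones, not the application's own midpoint equation:

import numpy as np

def poincare_to_klein(p):
    # Conformal (Poincare) -> projective (Klein) coordinates.
    return 2.0 * p / (1.0 + np.sum(p * p, axis=-1, keepdims=True))

def klein_to_poincare(k):
    # Projective (Klein) -> conformal (Poincare) coordinates.
    return k / (1.0 + np.sqrt(1.0 - np.sum(k * k, axis=-1, keepdims=True)))

def einstein_midpoint(points):
    # Geometric center of an (n, d) array of Poincare-ball points:
    # average in the Klein model, weighted by the Lorentz factor of each point.
    k = poincare_to_klein(points)
    gamma = 1.0 / np.sqrt(1.0 - np.sum(k * k, axis=-1, keepdims=True))
    return klein_to_poincare((gamma * k).sum(axis=0) / gamma.sum())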
As to dependent claim 15, the rejection of claim 13 is incorporated. Sikka and Gao further teach the first and second processing layers mapped for claim 4 above: a first processing layer that obtains an embedding vector represented by the to-be-processed data in the hyperbolic space [Sikka ¶37, ¶40] and a second processing layer that calculates a geometric center of the embedding vector in the hyperbolic space, to obtain the feature vector [Sikka, cluster centers, ¶53, quoted above].

As to independent claim 16, Sikka teaches a data processing apparatus, wherein the apparatus comprises a memory and a processor, the memory stores code, and the processor is configured to execute the code [apparatus, processor, and memory with instructions, ¶8] to perform the training method mapped for claim 8 above: obtaining training data and a corresponding category label [Sikka ¶41]; processing the training data using a neural network to obtain a processing result [Sikka Fig. 7, ¶41], wherein the feature extraction network extracts a feature vector of the training data [Sikka ¶7, ¶37] and the classification network processes the feature vector based on an operation rule of hyperbolic space [Sikka ¶33-34]; obtaining a loss based on the category label and the processing result [Sikka ¶41]; obtaining, based on the loss, a gradient expressed in the hyperbolic space [Riemannian SGD, Sikka ¶40-41]; and updating the neural network based on the gradient to obtain an updated neural network [Sikka Fig. 7, ¶41]. Sikka does not specifically teach that the neural network comprises a feature extraction network and a classification network; however, Gao teaches this [Fig. 6, 602, ¶19-20], and the combination rationale given for claim 8 applies equally here [Gao ¶78].

As to dependent claim 17, the rejection of claim 16 is incorporated. Sikka and Gao further teach wherein the processor is configured to obtain the code and perform: updating the feature extraction network in the neural network based on the gradient, to obtain an updated feature extraction network, wherein the updated feature extraction network is configured to extract the feature vector expressed by the training data in the hyperbolic space [Sikka, hyperbolic embedding over iterations (updating), Fig. 7 "Iterations" and SGD, ¶40-41, as cited for claim 9].

As to dependent claim 18, the rejection of claim 16 is incorporated. Sikka and Gao further teach the classification-network limitations mapped for claims 7 and 10 above: processing the feature vector based on the operation rule of the hyperbolic space to obtain a to-be-normalized vector, mapping it to the Euclidean space, and performing normalization processing there to obtain the processing result [Sikka, ¶56-57, ¶40, Fig. 4A].

Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Sikka in view of Gao, as applied in the rejection of claims 1 and 13 above, and further in view of EL-YANIV et al. (US 20170286830 A1, hereinafter EL-YANIV).

As to dependent claim 3, the rejection of claim 1 is incorporated. Sikka and Gao do not specifically teach wherein the classification network comprises a plurality of neurons, each neuron is configured to process input data based on an activation function, and the activation function comprises the operation rule based on the hyperbolic space. However, EL-YANIV teaches these limitations [neurons, layers, and activation functions, ¶5: "each the neuron gradient is of an output of a respective the quantized activation function in one layer of the plurality of layers with respect to an input of the respective quantized activation function and is calculated such that when an absolute value of the input is smaller than a positive constant threshold value"]. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the modeling disclosed by Sikka and Gao by incorporating the activation-function limitations disclosed by EL-YANIV, because all three techniques address the same field of machine learning, and incorporating EL-YANIV into Sikka and Gao reduces the models' resource consumption for power-efficient results [EL-YANIV ¶36].

As to dependent claim 14, the rejection of claim 13 is incorporated. The same limitations, mapping, and rationale apply as set forth for claim 3 above [EL-YANIV ¶5, ¶36].

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Sikka in view of Gao, as applied in the rejection of claim 4 above, and further in view of Li et al. (US 8533617 B2, hereinafter Li).

As to dependent claim 6, the rejection of claim 4 is incorporated. Sikka and Gao further teach wherein the embedding vector is expressed based on a second conformal model [Sikka, the Poincaré model is conformal, ¶40-41]. Sikka and Gao do not specifically teach that the second processing layer is configured to calculate a geometric center of the embedding vector expressed based on the second conformal model, to obtain the feature vector, wherein the second conformal model represents that the hyperbolic space is mapped to Euclidean space in a second conformal mapping manner. However, Li teaches these limitations [hyperbolic center, Col. 4 ln. 63-67: "hyperbolic plane 32 as a center."; conformal Poincaré and Klein models, Col. 4 ln. 25-45: "One conformal mapping is sometimes referred to as the 'Poincare model.' A 'projective mapping' from a hyperbolic plane to a planar unit disk is a mapping that takes lines in the hyperbolic space into lines in the unit disk but distorts angles. One projective mapping is sometimes referred to as the 'Klein model.'"]. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the modeling disclosed by Sikka and Gao by incorporating the conformal-model geometric-center limitations disclosed by Li, because all three techniques address the same field of machine learning, and incorporating Li into Sikka and Gao enhances the visualization of data to enable better exploration of large data sets [Li, Col. 1 ln. 65 - Col. 2 ln. 11].

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. Hauck (US 20230170040 A1) teaches a classification model that generates an outcome (result) using hyperbolic space (see ¶14 and ¶138).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEAU SPRATT, whose telephone number is (571) 272-9919. The examiner can normally be reached M-F, 8:30-5 PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at (571) 212-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/BEAU D SPRATT/
Primary Examiner, Art Unit 2143
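A note on the claims 7, 10, and 18 limitation of mapping a to-be-normalized vector to Euclidean space and normalizing it there: one common concrete reading is an origin log-map followed by an ordinary softmax. A hypothetical Python sketch of that reading (an illustration, not a construction adopted in the Office Action; weights is an assumed Euclidean classifier matrix):

import numpy as np

def classify(feature_hyp, weights):
    # Log-map at the origin: Poincare-ball feature -> Euclidean tangent vector.
    n = np.linalg.norm(feature_hyp) + 1e-9
    v = np.arctanh(n) * feature_hyp / n
    # Ordinary Euclidean linear layer, then a numerically stable softmax
    # ("normalization processing ... in the Euclidean space").
    logits = weights @ v
    z = np.exp(logits - logits.max())
    return z / z.sum()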

Prosecution Timeline

Dec 19, 2022 — Application Filed
Mar 17, 2026 — Non-Final Rejection: §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595715 — Cementing Lab Data Validation based On Machine Learning — granted Apr 07, 2026 (2y 5m to grant)
Patent 12596955 — REWARD FEEDBACK FOR LEARNING CONTROL POLICIES USING NATURAL LANGUAGE AND VISION DATA — granted Apr 07, 2026 (2y 5m to grant)
Patent 12596956 — INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD FOR PRESENTING REACTION-ADAPTIVE EXPLANATION OF AUTOMATIC OPERATIONS — granted Apr 07, 2026 (2y 5m to grant)
Patent 12561464 — CATALYST 4 CONNECTIONS — granted Feb 24, 2026 (2y 5m to grant)
Patent 12561606 — TECHNIQUES FOR POLL INTENTION DETECTION AND POLL CREATION — granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+26.6%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 432 resolved cases by this examiner. Grant probability derived from career allow rate.
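The headline figures follow from the stated counts; a small Python sketch of the presumed arithmetic (capping the with-interview figure at 99% is an assumption, since 79.2% plus a 26.6-point lift would exceed 100%):

granted, resolved = 342, 432
allow_rate = granted / resolved                 # 0.792 -> "79% Grant Probability"
with_interview = min(allow_rate + 0.266, 0.99)  # stated +26.6% interview lift, capped
print(f"base {allow_rate:.1%}, with interview {with_interview:.0%}")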
