Prosecution Insights
Last updated: April 19, 2026
Application No. 17/801,779

CLUSTERING DEVICE, CLUSTERING METHOD, AND CLUSTERING PROGRAM

Status: Non-Final Office Action (§101, §103, §112)
Filed: Aug 23, 2022
Examiner: CHUANG, SU-TING
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nippon Telegraph and Telephone Corporation
OA Round: 1 (Non-Final)

Grant Probability: 52% (Moderate)
Expected OA Rounds: 1-2
Expected Time to Grant: 4y 5m
Grant Probability with Interview: 91%

Examiner Intelligence

Career Allow Rate: 52% (52 granted / 101 resolved cases; -3.5% vs TC avg)
Interview Lift: +39.7% for resolved cases with interview (strong)
Typical Timeline: 4y 5m average prosecution; 28 applications currently pending
Career History: 129 total applications across all art units

Statute-Specific Performance

§101: 27.4% (-12.6% vs TC avg)
§103: 46.3% (+6.3% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)

TC averages are estimates; figures are based on career data from 101 resolved cases.
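As a quick consistency check, the headline rates above can be reproduced from the raw counts. Note that how the tool derives the 91% "with interview" figure is an assumption here (adding the reported +39.7-point lift to the career rate), not something the report states:

```python
# Reproduce the examiner's headline figures from the raw counts above.
granted, resolved = 52, 101
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 51.5%, shown rounded to 52% in the header

# Assumption: the 91% "with interview" figure equals the career rate
# plus the reported +39.7-point interview lift (51.5 + 39.7 ≈ 91.2).
print(f"{allow_rate * 100 + 39.7:.0f}%")
```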

Office Action

Rejections under §101, §103, and §112.
DETAILED ACTION

Claims 1-8 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 08/23/2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claim 1 is objected to because of the following informalities: each of "a latent variable calculation unit, implemented with one or more processors…," "a number-of-clusters identification unit, implemented with one or more processors…," "a hyperparameter information acquisition unit, implemented with one or more processors…," and "a clustering unit, implemented with one or more processors" should read "implemented with the one or more processors." Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-8 are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Claims 1 and 7-8 recite the limitation "the latent variable being a consecutive random variable of the number of dimensions." There is insufficient antecedent basis for the limitation "the number of dimensions" in the claim. For examination purposes, the examiner has interpreted "the number of dimensions" to be "a number of dimensions."

Claims 1 and 7-8 recite the limitation "identify the number of clusters." There is insufficient antecedent basis for the limitation "the number of clusters" in the claim. For examination purposes, the examiner has interpreted "the number of clusters" to be "a number of clusters."

Claims 1 and 7-8 recite the limitation "when a plurality of the calculated estimated values of the latent variable." There is insufficient antecedent basis for the limitation "the calculated estimated values" in the claim. For examination purposes, the examiner has interpreted "a plurality of the calculated estimated values of the latent variable" to be "a plurality of estimated values of the latent variable."

Claims 2-6 are also rejected due to their dependency on a rejected claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-6 recite an apparatus with one or more processors, claim 7 recites a method, and claim 8 recites a non-transitory medium. Therefore, claims 1-6 are directed to a machine, claim 7 to a process, and claim 8 to a manufacture.
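For readers without a machine-learning background, the pipeline recited in the independent claims (estimate a latent variable from sensor data under a generative model, then cluster the estimates) can be illustrated with a minimal numpy sketch. Everything concrete here is an assumption for illustration: the linear generative map W, the dimensions, the noise level, and the least-squares estimator are not the application's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the claimed arrangement: sensor data x is
# generated from a continuous (here Gaussian, cf. claim 3) latent z
# through a linear map W plus noise.
latent_dim, sensor_dim, n = 2, 6, 200
W = rng.normal(size=(sensor_dim, latent_dim))               # generative map
z_true = rng.normal(size=(n, latent_dim))                   # latent variable
x = z_true @ W.T + 0.05 * rng.normal(size=(n, sensor_dim))  # sensor data

# A "model for estimating the latent variable from the sensor data",
# constructed from the generative model: the least-squares pseudo-inverse
# of W, i.e. a single linear layer (well within "two or less layers").
z_est = x @ np.linalg.pinv(W).T                             # estimated latent values

# The estimates recover the true latents up to the injected noise.
print(float(np.abs(z_est - z_true).mean()))
```

In the claims, these estimated values would then be clustered, with the cluster count identified beforehand (dependent claim 6 names the elbow method for that step); the clustering itself is omitted here for brevity.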
With respect to claims 1 and 7-8:

2A Prong 1: The claims recite a judicial exception:
- "calculate, from the sensor data, an estimated value of the latent variable from which the sensor data is generated" (mental process – evaluation or judgment)
- "identify the number of clusters when a plurality of the calculated estimated values of the latent variable are clustered" (mental process – evaluation or judgment)
- "cluster the sensor data… by using the acquired hyperparameter information and the identified number of clusters" (mental process – evaluation or judgment)

2A Prong 2: The judicial exception is not integrated into a practical application. The additional elements are:
- (claim 1) "a model construction unit, implemented with one or more processors, configured to…," "a latent variable calculation unit, implemented with one or more processors, configured to…," "a number-of-clusters identification unit, implemented with one or more processors, configured to…," "a hyperparameter information acquisition unit, implemented with one or more processors, configured to…," and "a clustering unit, implemented with one or more processors, configured to…"; (claim 7) "executed by a clustering apparatus comprising one or more processors"; (claim 8) "storing one or more instructions, that upon execution, cause a computer system to perform operations" (mere instructions to apply an exception – MPEP 2106.05(f)(2), invoking generic computer components)
- "construct, on assumption that sensor data is generated from a latent variable, a model for estimating the latent variable from the sensor data, based on a generative model for generating the sensor data from the latent variable, the latent variable being a consecutive random variable of the number of dimensions suitable for handling in unsupervised learning or a neural network having two or less layers" (mere instructions to apply an exception – MPEP 2106.05(f)(3), the particularity or generality of the application of the judicial exception: constructing a model based on a generative model)
- "by using the constructed model… by the unsupervised learning or the neural network having two or less layers… by a neural network having three or more layers…" (mere instructions to apply an exception – MPEP 2106.05(f)(3))
- "acquire hyperparameter information of the constructed model" (insignificant extra-solution activity – MPEP 2106.05(g)(3), data gathering and outputting)

Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements indicative of integration into a practical application, the claim is directed to an abstract idea.

2B: The claims do not include additional elements sufficient to amount to significantly more than the judicial exception. The same additional elements identified under Prong 2, considered individually, in combination, and with the claim as a whole, do not provide significantly more than the abstract idea; the acquisition of hyperparameter information is also well-understood, routine, and conventional activity (receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 – MPEP 2106.05(d)(II)(i)). Therefore, the claims are not patent eligible.

With respect to claim 2:

2A Prong 2: The judicial exception is not integrated into a practical application. The limitation "wherein the sensor data is one or more of physiological data of a human body, acceleration data indicating a movement of a human body, and rotation amount data indicating a movement of a human body" is mere instructions to apply an exception (MPEP 2106.05(f)(3)). Claim 1 recites constructing a model for generating the sensor data from the latent variable, which is mere instructions to apply an exception; specifying the type of sensor data is not indicative of integration into a practical application, so the claim is directed to an abstract idea.

2B: For the same reason, specifying the type of sensor data does not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

With respect to claim 3:

2A Prong 2: The judicial exception is not integrated into a practical application. The limitation "wherein the consecutive random variable is a random variable in accordance with normal distribution" is mere instructions to apply an exception (MPEP 2106.05(f)(3)). Claim 1 recites constructing a model for generating the sensor data from the latent variable, the latent variable being consecutive, which is mere instructions to apply an exception; the specifics of the latent variable are not indicative of integration into a practical application, so the claim is directed to an abstract idea.

2B: For the same reason, the specifics of the latent variable do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
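Dependent claim 6, addressed below, recites applying an elbow method when identifying the number of clusters. As an illustrative aside, that technique can be sketched with a plain k-means implementation. The synthetic three-cluster data, the farthest-point initialization, and the drop-ratio elbow rule are all assumptions for illustration, taken neither from the claims nor from the cited art:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic, well-separated 2-D "latent estimate" clusters (illustrative only).
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
pts = np.vstack([c + rng.normal(scale=0.4, size=(60, 2)) for c in centers])

def kmeans_inertia(data, k, n_iter=30):
    """Lloyd's k-means with farthest-point init; returns the
    within-cluster sum of squares (inertia) for k clusters."""
    cent = [data[0]]
    for _ in range(1, k):
        d2 = ((data[:, None, :] - np.array(cent)[None, :, :]) ** 2).sum(-1).min(1)
        cent.append(data[d2.argmax()])   # farthest point from chosen centers
    cent = np.array(cent, dtype=float)
    for _ in range(n_iter):
        d2 = ((data[:, None, :] - cent[None, :, :]) ** 2).sum(-1)
        lbl = d2.argmin(1)
        cent = np.array([data[lbl == j].mean(0) if np.any(lbl == j) else cent[j]
                         for j in range(k)])
    d2 = ((data[:, None, :] - cent[None, :, :]) ** 2).sum(-1)
    return d2.min(1).sum()

# Elbow method: inertia falls steeply until k reaches the underlying
# cluster count, then flattens; pick k where the drop-off ratio is largest.
inertias = [kmeans_inertia(pts, k) for k in range(1, 7)]
drops = [inertias[i] - inertias[i + 1] for i in range(5)]
k_elbow = 2 + max(range(4), key=lambda i: drops[i] / (drops[i + 1] + 1e-12))
print(k_elbow)  # 3 for this well-separated data
```

The drop-ratio rule is only one way to locate the elbow; the application's specification (cited at paragraph [0025] in the §103 rejection below) describes the elbow method in terms of the K-means method without fixing a particular selection rule.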
With respect to claim 4:

2A Prong 2: The judicial exception is not integrated into a practical application. The limitation "wherein the generative model is a neural network trained using the unsupervised learning to generate the sensor data from the latent variable" is mere instructions to apply an exception (MPEP 2106.05(f)(3), the particularity or generality of the application of the judicial exception). Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements indicative of integration into a practical application, the claim is directed to an abstract idea.

2B: Considering the same limitation individually, in combination, and with the claim as a whole, it does not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

With respect to claim 5:

2A Prong 2: The judicial exception is not integrated into a practical application. The limitation "wherein the neural network is either a Generative Adversarial Networks (GAN) or a Variational AutoEncoder (VAE)" is mere instructions to apply an exception (MPEP 2106.05(f)(3)). Since the claim as a whole does not contain any other additional elements indicative of integration into a practical application, the claim is directed to an abstract idea.

2B: Considering the same limitation individually, in combination, and with the claim as a whole, it does not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

With respect to claim 6:

2A Prong 2: The judicial exception is not integrated into a practical application. The limitation "wherein the number-of-clusters identification unit applies an elbow method to the unsupervised learning or the neural network having two or less layers, in identifying the number of clusters of the sensor data" is mere instructions to apply an exception (MPEP 2106.05(f)(3)). Since the claim as a whole does not contain any other additional elements indicative of integration into a practical application, the claim is directed to an abstract idea.

2B: Considering the same limitation individually, in combination, and with the claim as a whole, the additional elements do not provide significantly more than the abstract idea.
Therefore, the claim is not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2 and 4-8 are rejected under 35 U.S.C.
103 as being unpatentable over Loza ("Discrimination of movement-related cortical potentials exploiting unsupervised learned representations from ECoGs," 2019-11-22) in view of Hu ("Duplex Generative Adversarial Network for Unsupervised Domain Adaptation," 2018-06-18).

In regard to claims 1 and 7-8, Loza teaches:

A clustering apparatus comprising: … implemented with one or more processors (recited for each of the five claimed units) … (Loza, p. 2, 1. Introduction: "Considering the disadvantages of both invasive and non-invasive BCIs and keeping in mind the ultimate aim of designing a durable, fully-implantable BCI system, many research groups have suggested Electrocorticogram (ECoG) as a more practical solution."; p. 13, 5.2. Analysis of Results: "the MATLAB code corresponding to the proposed methods are available at https://github.com/carlosloza/EEGMDL."; the BCI (Brain–Computer Interface) system and code inherently teach all the general computer components)

[Image: media_image1.png]

a model construction unit,... configured to construct, on assumption that sensor data is generated from a latent variable, (Loza, p. 4, 3.1. Generative Model for ECoG: "The phasic event component is modeled taking inspiration from the shot noise model (Davenport and Root, 1958). y(t) is the result of a Temporal Marked Point Process (TMPP) with timings τ and marks (features) α and ω activating filters, d, over time:... (2) where D = {d_w} w = 1..K is a set of filters, kernels or atoms known as dictionary [latent variable]... Figure 2 illustrates the encoding from TMPP samples to noisy single-channel, bandpassed ECoG trace. [sensor data]"; see Fig.
2, ECoG trace y~(t) [sensor data] is generated from the dictionary D [a latent variable]) a model for estimating the latent variable from the sensor data, based on a generative model for generating the sensor data from the latent variable, the latent variable being a consecutive random variable of the number of dimensions (Loza, p. 4, 3.1. Generative Model for ECoG "Given an ensemble of single-channel ECoG recordings, {y~i(t) i = 1..N}. [from the sensor data] Learning on the model [constructing a model] implies estimating the dictionary D [for estimating the latent variable] whose elements, in general, are not restricted in duration—they represent bases from vector spaces of different dimensions. [the latent variable, a consecutive random variable of the number of dimensions suitable] On the other hand, inference or encoding is posed as learning the set of timings and marks of the TMPP, i.e., sampling from a point process."; p. 5, Figure 2 "Generative model [a generative model] for ECoG. A bandpassed, single–channel, single–trial ECoG trace, ỹ(t), is modeled as the noisy addition of weighted, scale–specific, shifted filters over time."; see Fig. 2, ECoG trace y~(t) [sensor data] is generated from the dictionary D [a latent variable]) suitable for handling in unsupervised learning or a neural network having two or less layers; (Loza, p. 4, 3.1. Generative Model for ECoG "Estimating ΘY, then, can be posed as a case of unsupervised representation learning for ECoG [unsupervised learning] (Bengio et al., 2013). The shallow generative framework and physiological-based constraints of the model guarantee that the learned dictionary and densities of timings, marks, and representations lead to meaningful and interpretable encoding mechanisms of the network...") a latent variable calculation unit,… configured to calculate, from the sensor data, an estimated value of the latent variable from which the sensor data is generated, by using the constructed model; (Loza, p. 5, 3.2. 
Learning on the Model "Estimating the latent variables [an estimated value of the latent variable] of this type of generative models [the constructed model] usually falls into two categories depending whether the sources are explicitly estimated or not during learning… The alternative approach (adopted here) is to exploit block coordinate descent optimization to iteratively estimate the sources while keeping the filters fixed, and then, learn the dictionary atoms while keeping {τ, α, ω} fixed... For our case, learning takes place in two very distinctive sequential stages: discrimination between dynamical regimes and hierarchical partitioning of the data (Figure 3)."; see Fig. 3, D [an estimated value of the latent variable] is calculated from y~(t) [the sensor data]) a number-of-clusters identification unit,... configured to identify the number of clusters when a plurality of the calculated estimated values of the latent variable are clustered by the unsupervised learning or the neural network having two or less layers; (Loza, p. 6, 3.2.1. Discrimination of Dynamical Regimes: From Traces to M–Snippets "We exploit the parsimony principles of Minimum Description Length (MDL) coding to build a hierarchical partitioning in RM… The MDL principle is invoked to cluster reoccurring patterns embedded in the columns of X... We exploit a cost function based on bit level representations to decide among three basic clustering operations: creating a cluster, adding a subsequence to an existing cluster, and merging clusters... The proposed algorithm alternatively estimates the TMPP marks and learns bases from vector spaces of different dimensions... the proposed clustering technique greedily selects the number of clusters, K, [identify the number of clusters] needed..."; p. 6, Figure 3 "MDL–based hierarchical clustering estimates TMPP timings and marks as well as bases from vector spaces of different dimensions, D. 
[a plurality of the calculated estimated values of the latent variable are clustered]"; p. 4, 3.1. Generative Model for ECoG "Estimating ΘY, then, can be posed as a case of unsupervised representation learning for ECoG [unsupervised learning] (Bengio et al., 2013). The shallow generative framework and physiological-based constraints of the model guarantee that the learned dictionary and densities of timings, marks, and representations lead to meaningful and interpretable encoding mechanisms of the network...")

[Image: media_image2.png]

Loza does not teach, but Hu teaches:

a hyperparameter information acquisition unit,... configured to acquire hyperparameter information of the constructed model; and (Hu, p. 1500, 3. Method, "In summary, the objective function of the encoder and generator is formulated as below: L_G = … H(Ds(G(E(xs),t))y~st + α∥G(E(xs),−xs∥22… + H(Ds(G(E(xt,s)),y~st + α∥G(E(xt),−xt∥22).(7) where H(⋅,⋅) is the cross entropy loss used in softmax layer, and α is a balance parameter [e.g. hyperparameter information] for the two terms.... In summary, the objective function of the duplex discriminators Ds and Dt is formulated as below: L_D = ... (14)... For the categorial classification, a classifier C is established on the latent representation z and its objective function is as follows: L_C = ... (15)... The overall objective function can be formulated as follows: L = ... (LG+LD+βLC) (16) where β is a balance parameter. [e.g. hyperparameter information]")

a clustering unit,... configured to cluster the sensor data by a neural network having three or more layers, by using the acquired hyperparameter information and the identified number of clusters. (Hu, p. 1498, 1.
Introduction "In addition, either discriminator [by a neural network] is not only responsible for the real/fake discrimination to restrict the images from the generator to be real, but also the categorial classification for real images [cluster the sensor data] to enforce the latent representation domain invariant and preserve its category information. To do the final classification, a classifier is established on the latent representation, which can be also used to predict the labels [the identified number of clusters] of target domain images further used in the training stage."; p. 1500, Figure 2 "The duplex discriminators Ds and Dt [by a neural network] aiming for distinguishing the generated images xts/xst from the real images xs/xt and categorizing xs and xt [cluster the sensor data] as well. To do the final classification, a classifier C is built based on the latent representation z, expecting z to preserve both the common feature of source and target domains and category information."; p. 1500, 3. Method "the pseudo categorial labels [the identified number of clusters] predicted from the classifier C are used when optimizing Dt."; see Fig. 2, Duplex Discriminator D has 4-5 layers [by a neural network having three or more layers], and see the above limitation: the whole model is trained based on α and β, therefore categorizing images is done by using α and β, i.e., by using [the acquired hyperparameter information])

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Loza to incorporate the teachings of Hu by including unsupervised training using a GAN model, to attain domain transformation for classification. Doing so would achieve the state-of-the-art performance on unsupervised domain adaptation for classification. (Hu, p.
1498, Abstract "this work proposes a novel GAN architecture with duplex adversarial discriminators (referred to as DupGAN), which can achieve domain-invariant representation and domain transformation... Our proposed work achieves the state-of-the-art performance on unsupervised domain adaptation of digit classification and object recognition.")

Claims 7-8 recite substantially the same limitations as claim 1; therefore, the rejections applied to claim 1 also apply to claims 7-8. In addition, Loza teaches: (claim 7) A clustering method executed by a clustering apparatus comprising one or more processors; (claim 8) A non-transitory, computer-readable medium storing one or more instructions, that upon execution, cause a computer system to perform operations comprising (Loza, p. 2, 1. Introduction "Considering the disadvantages of both invasive and non-invasive BCIs and keeping in mind the ultimate aim of designing a durable, fully-implantable BCI system, many research groups have suggested Electrocorticogram (ECoG) as a more practical solution."; p. 13, 5.2. Analysis of Results "the MATLAB code corresponding to the proposed methods are available at https://github.com/carlosloza/EEGMDL."; the BCI (Brain–Computer Interface) system and code inherently teach all the general computer components)

In regard to claim 2, Loza teaches:

wherein the sensor data is one or more of physiological data of a human body, acceleration data indicating a movement of a human body, and rotation amount data indicating a movement of a human body. (Loza, 1. Introduction "many research groups have suggested Electrocorticogram (ECoG) [physiological data of a human body] as a more practical solution.")

In regard to claim 4, Loza does not teach, but Hu teaches:

wherein the generative model is a neural network trained using the unsupervised learning to generate the sensor data from the latent variable. (Hu, p.
1498, Abstract "Following the similar idea of GAN, this work proposes a novel GAN architecture with duplex adversarial discriminators (referred to as DupGAN) [a neural network], which can achieve domain-invariant representation and domain transformation. Specifically, our proposed network consists of three parts, an encoder, a generator and two discriminators. The encoder embeds samples from both domains into the latent representation, and the generator decodes the latent representation to both source and target domains [generate the sensor data from the latent variable] respectively conditioned on a domain code, i.e., achieves domain transformation... Our proposed work achieves the state-of-the-art performance on unsupervised domain adaptation... [the unsupervised learning]"; p. 1500, Figure 2 "The generator decodes z into source and target domain images respectively"; see Fig. 2, generator G generating images from latent representation z) The rationale for combining the teachings of Loza and Hu is the same as set forth in the rejection of claim 1.

In regard to claim 5, Loza does not teach, but Hu teaches:

wherein the neural network is either a Generative Adversarial Networks (GAN) or a Variational AutoEncoder (VAE). (Hu, p. 1498, Abstract "Following the similar idea of GAN, this work proposes a novel GAN architecture [a Generative Adversarial Networks (GAN)] with duplex adversarial discriminators (referred to as DupGAN) [a neural network], which can achieve domain-invariant representation and domain transformation.") The rationale for combining the teachings of Loza and Hu is the same as set forth in the rejection of claim 1.

In regard to claim 6, Loza teaches:

wherein the number-of-clusters identification unit applies an elbow method to the unsupervised learning or the neural network having two or less layers, in identifying the number of clusters of the sensor data. (Loza, p. 6, 3.2.2.
Learning Bases of Different Dimensions "After X is computed, the naive solution to extract centers of mass in RM would involve classic static clustering algorithms, e.g., k-means. [an elbow method]"; in light of specification [0025] "Note that the elbow method is, for example, a method in which the K-means method"; k-means is an unsupervised learning algorithm)

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Loza and Hu as applied to claim 1, and further in view of Scholler ("Sparse Approximations for Drum Sound Classification," 2011-07-04).

In regard to claim 3, Loza and Hu do not teach, but Scholler teaches:

wherein the consecutive random variable is a random variable in accordance with normal distribution. (Scholler, p. 933, I. Introduction "The complete set of the elementary atomic functions is called a dictionary. In sparse coding, overcomplete dictionaries are normally used."; p. 934, Methods "Matching Pursuit (MP) [22] is a method to derive a sparse approximation of a signal using a dictionary of atomic functions... We employed this sparse coding optimization method of [4] to obtain sparse atomic functions for a dataset of drum phrases... where N is the total number of atomic functions (the size of the dictionary)... After initializing the atomic functions randomly with a normal distribution, learning is done in an iterative manner.")

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Loza and Hu to incorporate the teachings of Scholler by including atomic functions learned in an unsupervised manner. Doing so would optimize the length as well as the shape of the atoms. (Scholler, p.
933, Abstract "we present a biologically inspired three-step process for audio classification: 1) Efficient atomic functions are learned in an unsupervised manner on mixtures of percussion sounds (drum phrases), optimizing the length as well as the shape of the atoms.") Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Gu ("EEG-Based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and Their Applications" 20200128) teaches (Gu, p. 12 "The significance of applying GAN for EEG is that it could address the major practical issue of insufficient training data."; p. 14, "As introduced in the previous section, GAN, combined with transfer learning, could be rewarding in restraining domain divergence to improve domain adaptation [155] [156]. Hu et al. [157] proposed DupGAN, a GAN framework with one encoder, one generator and two adversarial discriminators, to attain domain transformation for classification.") Any inquiry concerning this communication or earlier communications from the examiner should be directed to SU-TING CHUANG whose telephone number is (408)918-7519. The examiner can normally be reached Monday - Thursday 8-5 PT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed can be reached at (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. 
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SU-TING CHUANG/Examiner, Art Unit 2146
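For readers less familiar with the technique at issue in the claim 6 rejection, the elbow heuristic the examiner maps onto k-means can be sketched as follows. This is an illustrative sketch only: the NumPy implementation, toy "sensor" data, and the chord-distance variant of the elbow rule are all assumptions for demonstration, not taken from the application or the cited art.

```python
import numpy as np

def kmeans_inertia(X, k, iters=50):
    """Plain k-means (Lloyd's algorithm); returns the final inertia
    (sum of squared distances of points to their nearest center)."""
    # Farthest-point ("maximin") seeding: deterministic and reliable
    # when clusters are well separated.
    centers = [X[0]]
    for _ in range(1, k):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d2))])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each point to its nearest center, then move centers
        # to their cluster means (keep a center if its cluster empties).
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

def elbow_k(X, k_max=8):
    """Elbow rule: pick the k whose inertia lies farthest below the
    straight line joining the ends of the inertia-vs-k curve."""
    ks = np.arange(1, k_max + 1)
    inertias = np.array([kmeans_inertia(X, k) for k in ks])
    chord = np.interp(ks, [ks[0], ks[-1]], [inertias[0], inertias[-1]])
    return int(ks[np.argmax(chord - inertias)])

# Toy 2-D data: three well-separated clusters of 40 points each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, size=(40, 2)) for c in (0.0, 3.0, 6.0)])
n_clusters = elbow_k(X)
```

On data like this the inertia curve drops sharply up to the true cluster count and flattens after it, which is the behavior the specification's elbow-method passage ([0025]) relies on.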

Prosecution Timeline

Aug 23, 2022
Application Filed
Nov 29, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561600
LINEAR TIME ALGORITHMS FOR PRIVACY PRESERVING CONVEX OPTIMIZATION
2y 5m to grant Granted Feb 24, 2026
Patent 12518154
TRAINING MULTIMODAL REPRESENTATION LEARNING MODEL ON UNANNOTATED MULTIMODAL DATA
2y 5m to grant Granted Jan 06, 2026
Patent 12481725
SYSTEMS AND METHODS FOR DOMAIN-SPECIFIC ENHANCEMENT OF REAL-TIME MODELS THROUGH EDGE-BASED LEARNING
2y 5m to grant Granted Nov 25, 2025
Patent 12468951
Unsupervised outlier detection in time-series data
2y 5m to grant Granted Nov 11, 2025
Patent 12412095
COOPERATIVE LEARNING NEURAL NETWORKS AND SYSTEMS
2y 5m to grant Granted Sep 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
52%
Grant Probability
91%
With Interview (+39.7%)
4y 5m
Median Time to Grant
Low
PTA Risk
Based on 101 resolved cases by this examiner. Grant probability derived from career allow rate.
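The headline projections above can be checked against the examiner's career numbers. A minimal sketch, assuming (an inference about this dashboard, not documented behavior) that the "with interview" figure is simply the career allow rate plus the interview lift in percentage points:

```python
# Assumption: the dashboard adds the interview lift (in percentage
# points) to the career allow rate. 52 granted / 101 resolved and the
# +39.7 lift are the figures reported on this page.
granted, resolved = 52, 101
base = 100 * granted / resolved      # career allow rate, about 51.5%
lift = 39.7                          # reported interview lift (pp)
with_interview = base + lift         # about 91.2%, shown as 91%
```

Under this assumption the 91% "with interview" figure is consistent with the 52% career rate and the +39.7% lift.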
