Prosecution Insights
Last updated: April 18, 2026
Application No. 18/323,866

DATA BOUNDARY DERIVING SYSTEM AND METHOD

Non-Final OA: §101, §102, §103, §112

Filed: May 25, 2023
Examiner: GERMICK, JOHNATHAN R
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: Simplatform Co. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 47% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 2m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 47% (43 granted / 91 resolved; -7.7% vs TC avg)
Interview Lift: +32.1% on resolved cases with interview
Typical Timeline: 4y 2m average prosecution; 28 applications currently pending
Career History: 119 total applications across all art units

Statute-Specific Performance

§101: 29.0% (-11.0% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§103: 38.5% (-1.5% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Deltas are measured against the Tech Center average estimate. Based on career data from 91 resolved cases.

Office Action

Grounds of rejection: §101, §102, §103, §112
DETAILED ACTION

This action is responsive to the claims filed on 05/25/2023. Claims 1-13 are pending in the case. Claims 1, 7, and 13 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier.
Such claim limitations are: "sample data reception unit," "cluster generation unit," "probability density function derivation unit," and "learning data generation unit" in claims 1-6, and "a sample data reception step," "a cluster generation step," "a probability density function derivation step," and "a learning data generation step" in claims 7-13.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5 and 11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The term "representing points having regular intervals within the area" in claims 5 and 11 is a relative term which renders the claim indefinite. The term "regular" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 13 is rejected under 35 U.S.C. 101 because the claim is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because it is directed to a transitory computer readable storage medium (i.e., signals per se).

Claims 1-13 are rejected under 35 U.S.C. 101 because the claims are directed to an abstract idea without significantly more.

Claim 1/7/13

Under Step 1, claim 1 is directed to a data boundary deriving system, which is a system, one of the statutory categories. Under Step 1, claim 7 is directed to a boundary deriving method, which is a process, one of the statutory categories. Under Step 1, claim 13 is directed to a computer readable storage medium, which broadly includes signals per se, not a statutory category.
Under Step 2A Prong 1, the claim recites the following limitations, which are considered mental evaluations: a cluster generation unit configured to generate a plurality of clusters by classifying the plurality of pieces of sample data; a probability density function derivation unit configured to derive a probability density function based on characteristic values of data included in each of the plurality of generated clusters; and a learning data generation unit configured to generate learning data by calculating values of the probability density function of a cluster including each piece of sample data for each of the plurality of sample data and labeling second sample data based on the calculated values. Each of these amounts to a decision about data. Clustering, derivation of a probability density function, calculating values, and labeling are all activities which can be performed in the mind.

Under Step 2A Prong 2, the claim recites the following additional element: a sample data reception unit configured to receive a plurality of pieces of sample data having a plurality of characteristic values. This amounts to adding insignificant extra-solution activity to the judicial exception, because the limitation describes mere data gathering. See MPEP 2106.05(g).

Further, under Step 2B, the additional element (a sample data reception unit configured to receive a plurality of pieces of sample data having a plurality of characteristic values) is well-understood, routine, and conventional activity because it amounts to "transmitting or receiving data over a network" (see MPEP 2106.05(d)(II)(i)).

Claim 2/8

The claim depends on claim 1/7. Under Step 2A Prong 1, the claim recites the limitations: derive a mean value of characteristic values of sample data included in each of the plurality of clusters and a covariance matrix for all the characteristic values; and derive the probability density function using the mean value and the covariance matrix. These further describe the abstract ideas recited in the parent claims; in particular, the limitations describe mental evaluations. Furthermore, under Step 2A Prong 2 and Step 2B, the claim does not recite additional elements to consider.

Claim 3/9

The claim depends on claim 2/8. Each of the limitations described in the claim, under Step 2A Prong 1, only serves to describe the abstract ideas addressed in the independent claim; in particular, the limitations describe mental evaluations. Furthermore, under Step 2A Prong 2 and Step 2B, the claim does not recite additional elements to consider.

Claim 4/10

The claim depends on claim 1/7. Under Step 2A Prong 1, the claim recites the limitations: the sample data reception unit identifies outliers from the plurality of pieces of received sample data and removes the identified outliers, and wherein the cluster generation unit generates the clusters using the sample data from which the outliers have been removed. These further describe the abstract ideas recited in the parent claims; in particular, the limitations describe mental evaluations. Furthermore, under Step 2A Prong 2 and Step 2B, the claim does not recite additional elements to consider.

Claim 5/11

The claim depends on claim 1/7. Under Step 2A Prong 1, the claim recites the limitations: set an area including the sample data, and selects data, representing points having regular intervals within the area, as the second sample data; and generate the learning data by labeling the second sample data. These further describe the abstract ideas recited in the parent claims; in particular, the limitations describe mental evaluations. Furthermore, under Step 2A Prong 2 and Step 2B, the claim does not recite additional elements to consider.

Claim 6/12

The claim depends on claim 1/7. Under Step 2A Prong 1, the claim recites the limitations: set a value, corresponding to a predetermined proportion of a peak of probability density function values of the respective pieces of second sample data, as a boundary value; and label the individual pieces of data based on the boundary value. These further describe the abstract ideas recited in the parent claims; in particular, the limitations describe mental evaluations. Furthermore, under Step 2A Prong 2 and Step 2B, the claim does not recite additional elements to consider.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 7-9, and 13 are rejected under 35 U.S.C.
102(a)(1) as being anticipated by Janousek, "Gaussian Mixture Model Cluster Forest."

Claim 1

Janousek teaches:

A data boundary deriving system comprising: (Abstract: "This article describes a new ensemble clustering algorithm based on CF that internally uses a probabilistic model called Gaussian Mixture Model (GMM)." A clustering algorithm is a type of boundary deriving system.)

a sample data reception unit configured to receive a plurality of pieces of sample data having a plurality of characteristic values; (pg. 1, Section 2: "In the first step of CVG, the feature vector is initialized with n randomly selected features." Selecting features amounts to receiving a plurality of sample data. [Table 1, pg. 3] The associated attributes of the features and dataset are the plurality of characteristic values.)

a cluster generation unit configured to generate a plurality of clusters by classifying the plurality of pieces of sample data; (pg. 1, Section 2: "Then, k-means algorithm is applied and quality ρ of obtained clustering is evaluated using Formula 1, where c is result of the k-means clustering (positions of cluster centers),")

a probability density function derivation unit configured to derive a probability density function based on characteristic values of data included in each of the plurality of generated clusters; (pg. 3, Section 3: "Mixture models describes the probability distribution of observations in the overall population. In case of GMM, this distribution is a multivariate Gaussian… a Probability density function (PDF) of Gaussian distribution, defined as:… GMM is often used for clustering. For this purpose, every cluster represents one sub-population… The goal of the clustering process is to estimate corresponding parameters for each cluster." The GMM in particular is used for the stated clustering, and the GMM derives the associated probability density function.)

and a learning data generation unit configured to generate learning data by calculating values of the probability density function of a cluster including each piece of sample data for each of the plurality of sample data and labeling second sample data based on the calculated values. (pg. 2-3, Section 3: "GMM is often used for clustering. For this purpose, every cluster represents one sub-population. This sub-population is described using mean vector μ and covariance matrix Σ. The goal of the clustering process is to estimate corresponding parameters for each cluster. Particular observation is assigned to the cluster based on the probability that it has been generated from distribution is represented by the cluster. This probability is computed for each cluster using Formula 12." [Formula 12] The probability assigned to a particular observation x is the label assigned based on the calculated values of the PDF according to the GMM.)

Claim 2

Janousek teaches claim 1. Janousek further teaches: derive a mean value of characteristic values of sample data included in each of the plurality of clusters and a covariance matrix for all the characteristic values; and derive the probability density function using the mean value and the covariance matrix. (pg. 2, Section 3: "In case of GMM, this distribution is a multivariate Gaussian. GMM is formally defined as:… M is a set of mean vectors, S is a set of covariance matrices, i-th mean vector μi ∈ M, i-th covariance matrix Σi ∈ S, and p(x|μi,Σi) is a Probability density function (PDF) of Gaussian distribution, defined as: [Gaussian PDF equation] … The goal of the clustering process is to estimate corresponding parameters for each cluster.")

Claim 3

Janousek teaches claim 2. Janousek further teaches: wherein the probability density function derivation unit derives the probability density function by the following equation: [equation image; the multivariate Gaussian PDF, p(x) = (2π)^(−n/2) |Σ|^(−1/2) exp(−½ (x−μ)ᵀ Σ⁻¹ (x−μ))] where: x is an n-dimensional characteristic value matrix of each piece of data; μ is an n-dimensional matrix of mean values for respective characteristics of the respective pieces of data; and Σ is the covariance matrix. (pg. 2, Section 3: "In case of GMM, this distribution is a multivariate Gaussian. GMM is formally defined as:… where x is a vector of particular observation… M is a set of mean vectors, S is a set of covariance matrices, i-th mean vector μi ∈ M, i-th covariance matrix Σi ∈ S, and p(x|μi,Σi) is a Probability density function (PDF) of Gaussian distribution, defined as: [Gaussian PDF equation] …")

Claim 7

The claim is rejected for the reasons set forth in the rejection of claim 1. Further, Janousek teaches: A data boundary deriving method performed in a data boundary deriving system equipped with a central processing unit and memory: (Abstract: "This article describes a new ensemble clustering algorithm based on CF that internally uses a probabilistic model called Gaussian Mixture Model (GMM)." pg. 4: "Published results of both metrics are average values after 20 executions of algorithm with 100 final clusters. Maximal number of iterations of Expectation-Maximization algorithm was set to 100 and the threshold for log-likelihood improvement was set to 0.0001." Such results in the art are implemented on computer hardware at least comprising a processing unit and memory.)

Claims 8-9

Claims 8-9 are rejected for the reasons set forth in the rejection of claims 2-3, respectively, in connection with claim 7.

Claim 13

The claim is rejected for the reasons set forth in the rejection of claim 1. Further, Janousek teaches: A computer-readable storage medium having stored thereon a program that causes a computer to perform (Abstract and pg. 4, as cited for claim 7: such results in the art are implemented on computer hardware at least comprising a processing unit and memory.)

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
§ 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4-6 and 10-12 are rejected under 35 U.S.C. § 103 as being unpatentable over Janousek, further in view of Rao, "Novel Pre-processing using Outlier Removal in Voice Conversion."

Claim 4

Janousek teaches claim 1. Janousek does not explicitly teach: wherein the sample data reception unit identifies outliers from the plurality of pieces of received sample data and removes the identified outliers, and wherein the cluster generation unit generates the clusters using the sample data from which the outliers have been removed.

Rao, however, when addressing pre-processing using outlier detection for GMMs, teaches this limitation. (pg. 3, Section 2.1: "To determine the outlying observations, we compare the score distance with the cut-off value… The frames that have distance more than this value are termed as outliers and hence, omitted from the training process… After the removal of outlier frames from the training dataset, we now fit a Gaussian Mixture Model (GMM)")

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the Gaussian mixture model clustering system of Janousek to comprise outlier removal prior to applying a GMM. One would have been motivated to make such a combination because both Janousek and Rao discuss fitting a GMM to sample data. Further, Rao notes that "outliers affect the performance of the system as they seem to be inconsistent with the dataset. They tend to shift the mean and scatter of the data away from their ideal values" (Section 2, pg. 2).

Claim 5

Janousek teaches claim 1. Janousek does not explicitly teach: set an area including the sample data, and selects data, representing points having regular intervals within the area, as the second sample data; and generate the learning data by labeling the second sample data.

Rao, however, when addressing pre-processing using outlier detection for GMMs, teaches this limitation. (pg. 3, Section 2.1: "To determine the outlying observations, we compare the score distance with the cut-off value… The frames that have distance more than this value are termed as outliers and hence, omitted from the training process. Since we assume our data to be normally distributed… After the removal of outlier frames from the training dataset, we now fit a Gaussian Mixture Model (GMM). We will use the state-of-the-art joint density GMM [9], for mapping of source and target speaker feature vectors." [Figure 1] The score threshold sets an area to include, and selects points within the area which are normally or regularly distributed. This data is then used to label or map source and target vectors.)

Accordingly, the combination would have been obvious for the same reasons set forth in the rejection of claim 4.

Claim 6

Janousek teaches claim 1. Janousek does not explicitly teach: set a value, corresponding to a predetermined proportion of a peak of probability density function values of the respective pieces of second sample data, as a boundary value; and label the individual pieces of data based on the boundary value.

Rao, however, when addressing pre-processing using outlier detection for GMMs, teaches this limitation. (pg. 3, Section 2.1, as quoted for claim 5. [Figure 1] The score threshold sets the peak distance for data to be included in the probability distribution function of the GMM; this threshold or boundary value is used to label the individual data pieces.)

Accordingly, the combination would have been obvious for the same reasons set forth in the rejection of claim 4.

Claims 10-12

Claims 10-12 are rejected for the reasons set forth in the rejection of claims 4-6, respectively, in connection with claim 7.

Conclusion

Prior art not relied upon: Vlassis et al., "A Greedy EM Algorithm for Gaussian Mixture Learning," teaches an expectation-maximization model for a Gaussian mixture model.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNATHAN R GERMICK whose telephone number is (571) 272-8363. The examiner can normally be reached M-F 9:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kakali Chaki, can be reached at 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.R.G./ Examiner, Art Unit 2122
/KAKALI CHAKI/ Supervisory Patent Examiner, Art Unit 2122
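A minimal sketch of the pipeline the claims recite, reconstructed from the claim language alone (not code from the application or from Janousek or Rao): receive sample data, cluster it, derive a per-cluster Gaussian PDF from the mean vector and covariance matrix (claims 2-3), select second sample data at regular intervals within an area containing the samples (claim 5), and label it against a boundary value set as a predetermined proportion of the PDF peak (claim 6). The cluster count, grid resolution, and 10% peak proportion here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sample data reception": two blobs of 2-D points (the characteristic values).
samples = np.vstack([
    rng.normal([0, 0], 0.5, size=(100, 2)),
    rng.normal([4, 4], 0.5, size=(100, 2)),
])

# Cluster generation (claim 1): a naive k-means with an assumed k=2.
k = 2
centers = samples[rng.choice(len(samples), k, replace=False)]
for _ in range(20):
    labels = np.argmin(np.linalg.norm(samples[:, None] - centers, axis=2), axis=1)
    centers = np.array([samples[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])

# PDF derivation (claims 2-3): per-cluster mean vector and covariance matrix.
def gaussian_pdf(x, mu, cov):
    d = len(mu)
    diff = x - mu
    inv = np.linalg.inv(cov)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return norm * np.exp(-0.5 * np.einsum('...i,ij,...j->...', diff, inv, diff))

params = [(samples[labels == j].mean(axis=0), np.cov(samples[labels == j].T))
          for j in range(k)]

# Second sample data (claim 5): points at regular intervals within an area
# containing the samples, i.e. a uniform grid.
lo, hi = samples.min(0) - 1, samples.max(0) + 1
gx, gy = np.meshgrid(np.linspace(lo[0], hi[0], 50), np.linspace(lo[1], hi[1], 50))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

# Labeling (claim 6): boundary value = assumed proportion (10%) of the PDF peak.
pdf_vals = np.max([gaussian_pdf(grid, mu, cov) for mu, cov in params], axis=0)
boundary = 0.1 * pdf_vals.max()
grid_labels = pdf_vals >= boundary   # inside/outside the derived data boundary

print(grid_labels.sum(), "of", len(grid), "grid points labeled inside the boundary")
```

The set of grid points above the threshold traces the kind of data boundary the application's title suggests; the examiner's §103 position is, in effect, that inserting Rao-style outlier removal before the clustering step is an obvious variant of this flow.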

Prosecution Timeline

May 25, 2023
Application Filed
Apr 03, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566962: DITHERED QUANTIZATION OF PARAMETERS DURING TRAINING WITH A MACHINE LEARNING TOOL (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566983: MACHINE LEARNING CLASSIFIERS PREDICTION CONFIDENCE AND EXPLANATION (granted Mar 03, 2026; 2y 5m to grant)
Patent 12554977: DEEP NEURAL NETWORK FOR MATCHING ENTITIES IN SEMI-STRUCTURED DATA (granted Feb 17, 2026; 2y 5m to grant)
Patent 12443829: NEURAL NETWORK PROCESSING METHOD AND APPARATUS BASED ON NESTED BIT REPRESENTATION (granted Oct 14, 2025; 2y 5m to grant)
Patent 12443868: QUANTUM ERROR MITIGATION USING HARDWARE-FRIENDLY PROBABILISTIC ERROR CORRECTION (granted Oct 14, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 47%
With Interview: 79% (+32.1%)
Median Time to Grant: 4y 2m
PTA Risk: Low
Based on 91 resolved cases by this examiner. Grant probability derived from career allow rate.
