Prosecution Insights
Last updated: April 19, 2026
Application No. 18/225,198

METHOD TO GENERATE TASKS FOR META-LEARNING

Non-Final OA (§101, §103)
Filed: Jul 24, 2023
Examiner: GRUSZKA, DANIEL PATRICK
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 32 (career history, across all art units; 32 currently pending)

Statute-Specific Performance

§101: 38.3% (-1.7% vs TC avg)
§103: 42.3% (+2.3% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 7.4% (-32.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 0 resolved cases

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

This Non-Final communication is in response to Application No. 18/225,198, filed 07/24/2023. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. BR10 2023 000929 8, filed on 1/18/2023.

Claim Objections

Claim 1 is objected to because of the following informalities: Line 12 has "kx". Examiner is assuming it should be "k_x". Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

101 Subject Matter Eligibility Analysis

Step 1: Claims 1-9 are within the four statutory categories (a process, machine, manufacture, or composition of matter). Claims 1-9 describe a process.

With respect to claim 1:

Step 2A Prong 1: The claim recites an abstract idea enumerated in the 2019 PEG.

representing each point from a set of n labeled samples S and a reference dataset of labeled points R in a feature domain as: R = {r_1, r_2, ..., r_α} and S = {s_1, s_2, ..., s_n}; (This is an abstract idea of a "mathematical concept": the recited R and S represent mathematical groups that fall under the "mathematical concepts" grouping.)
estimating a probability density function (PDF) of a distance distribution of samples from R, which have a same label k_x, to a centroid in the feature domain, respectively, where 1 ≤ x ≤ l; (This is an abstract idea of a "mathematical concept": the recited probability density function represents a mathematical operation that falls under the "mathematical concepts" grouping.)

grouping the samples in S based on a label k_x in x groups G, respectively, where G = {G_1, G_2, ..., G_x}; (This is an abstract idea of a "mental process": the grouping step, under its broadest reasonable interpretation, covers concepts that can be practically performed by a human using pen and paper.)

from each group G, drawing β samples from S as per the PDF estimated from R; and (This is an abstract idea of a "mathematical concept": the recited PDF represents a mathematical operation that falls under the "mathematical concepts" grouping.)

grouping all l*β samples into a new task T, wherein β is a user-defined parameter representing a number of samples per label that compose an output task. (This is an abstract idea of a "mental process": the grouping step, under its broadest reasonable interpretation, covers concepts that can be practically performed by a human using pen and paper.)

Step 2A Prong 2: Claim 1 does not recite any additional elements, and thus the abstract idea is not integrated into a practical application.

Step 2B: Claim 1 does not recite an additional element. Therefore, claim 1 is ineligible.

With respect to claim 2:

Step 2A Prong 1: Claim 2, which incorporates the rejection of claim 1, recites an additional abstract idea:

each task T corresponds to a set of n labeled points {p_1, p_2, ..., p_n} and each point p_{k_x} is associated with a label k_x ∈ {k_1, k_2, ..., k_l}, where 1 ≤ x ≤ l. (This is an abstract idea of a "mathematical concept": the recited points and labels represent a mathematical group that falls under the "mathematical concepts" grouping.)
Step 2A Prong 2: Claim 2 does not recite any additional elements, and thus the abstract idea is not integrated into a practical application.

Step 2B: Claim 2 does not recite an additional element. Therefore, claim 2 is ineligible.

With respect to claim 3:

Step 2A Prong 1: Claim 3, which incorporates the rejection of claim 1, recites an additional abstract idea:

a difficulty of task T is related to the distance distribution of points with same label k_x to a centroid of all samples with label k_x, ∀x ∈ [1, l]. (This is an abstract idea of a "mathematical concept": the recited distance distribution represents a statistical concept that falls under the "mathematical concepts" grouping.)

Step 2A Prong 2: Claim 3 does not recite any additional elements, and thus the abstract idea is not integrated into a practical application.

Step 2B: Claim 3 does not recite an additional element. Therefore, claim 3 is ineligible.

With respect to claim 4:

Step 2A Prong 1: Claim 4, which incorporates the rejection of claim 1, does not recite an abstract idea.

Step 2A Prong 2: The judicial exception is not integrated into a practical application:

the method is applied to binary classification tasks so as to allow classification of content of an input audio segment as a target spoken keyword (INV) or as a non-target keyword (OOV). (This amounts to no more than mere instructions to "apply" the exception using a generic computer component.)

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional element is recited at a generic level and represents generic computer components used to apply the abstract idea. Mere instructions to apply an exception cannot provide an inventive concept (MPEP 2106.05(f)). Therefore, claim 4 is ineligible.
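For context, the multi-step procedure recited in claim 1 (estimate a PDF of reference distances to a class centroid, group S by label, then draw β samples per label according to that PDF) can be sketched in code. This is a minimal, hypothetical illustration only, assuming a 1-D Gaussian kernel density estimate, Euclidean distance, and density-weighted sampling; every function name and the bandwidth value are assumptions, not the applicant's actual implementation.

```python
import math
import random
from collections import defaultdict

def centroid(points):
    """Mean vector of a list of equal-length feature vectors."""
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def gaussian_kde(values, bandwidth=0.5):
    """Return a 1-D Gaussian kernel density estimate fitted to `values`."""
    norm = len(values) * bandwidth * math.sqrt(2 * math.pi)
    def pdf(v):
        return sum(math.exp(-0.5 * ((v - t) / bandwidth) ** 2) for t in values) / norm
    return pdf

def generate_task(R, S, beta, rng=random):
    """Build one task T of l*beta samples (l = number of labels).

    R and S are lists of (feature_vector, label) pairs. For each label k:
    fit a KDE to the distances of R's label-k points from their centroid,
    then draw beta samples from S's label-k group with probability
    proportional to that KDE evaluated at each sample's own distance.
    """
    r_groups, s_groups = defaultdict(list), defaultdict(list)
    for vec, k in R:
        r_groups[k].append(vec)
    for vec, k in S:                      # the groups G = {G_1, ..., G_x}
        s_groups[k].append(vec)
    task = []
    for k, group in s_groups.items():
        c = centroid(r_groups[k])
        pdf = gaussian_kde([euclidean(v, c) for v in r_groups[k]])
        weights = [pdf(euclidean(v, c)) for v in group]
        task.extend((v, k) for v in rng.choices(group, weights=weights, k=beta))
    return task
```

Under these assumptions, generate_task(R, S, beta) returns l*beta labeled samples; claim 5's variant would instead fit the KDE to the distances of reference points that do not carry label k_x.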
With respect to claim 5:

Step 2A Prong 1: Claim 5, which incorporates the rejection of claim 4, recites an additional abstract idea:

representing each data point from a set of n labeled samples S and a reference dataset of labeled points R in a feature domain as: R = {r_1, r_2, ..., r_α} and S = {s_1, s_2, ..., s_n}; (This is an abstract idea of a "mathematical concept": the recited R and S represent mathematical groups that fall under the "mathematical concepts" grouping.)

estimating a probability density function (PDF) of a distance distribution of samples from R, which do not have a label k_x, to a centroid of all samples from R with label k_x in the feature domain, respectively, where 1 ≤ x ≤ l; (This is an abstract idea of a "mathematical concept": the recited probability density function represents a mathematical operation that falls under the "mathematical concepts" grouping.)

grouping the samples in S based on a label k_x in x groups G, respectively, where G = {G_1, G_2, ..., G_x}; (This is an abstract idea of a "mental process": the grouping step, under its broadest reasonable interpretation, covers concepts that can be practically performed by a human using pen and paper.)

computing, from all groups G, where G ≠ G_y, a distance between each of the samples and the centroid of G_y; (This is an abstract idea of a "mathematical concept": the recited computing represents a mathematical calculation that falls under the "mathematical concepts" grouping.)

drawing β samples from all groups G, where G ≠ G_y, as per the PDF estimated from R; (This is an abstract idea of a "mathematical concept": the recited PDF represents a mathematical operation that falls under the "mathematical concepts" grouping.)

grouping all l*β samples into a new task T. (This is an abstract idea of a "mental process": the grouping step, under its broadest reasonable interpretation, covers concepts that can be practically performed by a human using pen and paper.)

Step 2A Prong 2: The judicial exception is not integrated into a practical application:

corresponding each task T to a keyword-spotting (KWS) problem, each task T having a positive class called target spoken keyword (INV) and a negative class called non-target keyword (OOV). (This limitation amounts to adding insignificant extra-solution activity to the judicial exception.)

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements add insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is a well-understood, routine, conventional activity (MPEP 2106.05(d)(II)(iv)). Therefore, claim 5 is ineligible.

With respect to claim 6:

Step 2A Prong 1: Claim 6, which incorporates the rejection of claim 4, does not recite an abstract idea.

Step 2A Prong 2: The judicial exception is not integrated into a practical application:

all INV from other tasks correspond to keywords that are different from the tasks in task T. (This limitation amounts to adding insignificant extra-solution activity to the judicial exception.)

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements add insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is a well-understood, routine, conventional activity (MPEP 2106.05(d)(II)(iv)). Therefore, claim 6 is ineligible.

With respect to claim 7:

Step 2A Prong 1: Claim 7, which incorporates the rejection of claim 4, does not recite an abstract idea.
Step 2A Prong 2: The judicial exception is not integrated into a practical application:

a target spoken keyword (INV) represents keywords inside a vocabulary. (This limitation amounts to adding insignificant extra-solution activity to the judicial exception.)

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements add insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is a well-understood, routine, conventional activity (MPEP 2106.05(d)(II)(iv)). Therefore, claim 7 is ineligible.

With respect to claim 8:

Step 2A Prong 1: Claim 8, which incorporates the rejection of claim 4, does not recite an abstract idea.

Step 2A Prong 2: The judicial exception is not integrated into a practical application:

a non-target keyword (OOV) represents keywords out of a vocabulary. (This limitation amounts to adding insignificant extra-solution activity to the judicial exception.)

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements add insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is a well-understood, routine, conventional activity (MPEP 2106.05(d)(II)(iv)). Therefore, claim 8 is ineligible.

With respect to claim 9:

Step 2A Prong 1: Claim 9, which incorporates the rejection of claim 4, does not recite an abstract idea.

Step 2A Prong 2: The judicial exception is not integrated into a practical application:

the INV samples for each task are chosen based on available audio segments. (This limitation amounts to adding insignificant extra-solution activity to the judicial exception.)
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements add insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is a well-understood, routine, conventional activity (MPEP 2106.05(d)(II)(iv)). Therefore, claim 9 is ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 are rejected under 35 U.S.C. 103 as being unpatentable over Snell (NPL: "Prototypical Networks for Few-shot Learning") in view of Kamalov (NPL: "Kernel density estimation based sampling for imbalanced class distribution").

Regarding claim 1, Snell teaches:

representing each point from a set of n labeled samples S and a reference dataset of labeled points R in a feature domain as: R = {r_1, r_2, ..., r_α} and S = {s_1, s_2, ..., s_n}; (Section 2, Prototypical Networks, subsection 2.1, Notation: "In few-shot classification we are given a small support set of N labeled examples S = {(x1, y1), ..., (xN, yN)} where each xi ∈ R^D is the D-dimensional feature vector of an example and yi ∈ {1, ..., K} is the corresponding label.")

estimating a probability density function (PDF) of a distance distribution of samples from R, which have a same label k_x, to a centroid in the feature domain, respectively, where 1 ≤ x ≤ l; (Snell only teaches the centroid: Section 2.2, Model, where Equation 1 teaches the centroid: "Each prototype is the mean vector of the embedded support points belonging to its class.")

grouping the samples in S based on a label k_x in x groups G, respectively, where G = {G_1, G_2, ..., G_x}; (Section 2.1, Notation: "Sk denotes the set of examples labeled with class k.")

from each group G, drawing β samples from S as per the PDF estimated from R; and grouping all l*β samples into a new task T, wherein β is a user-defined parameter representing a number of samples per label that compose an output task. (Section 1, Introduction: "Notably, this model utilizes sampled mini-batches called episodes during training, where each episode is designed to mimic the few-shot task by subsampling classes as well as data points." And Section 2.6, Design choices: "A straightforward way to construct episodes, used in Vinyals et al. [32] and Ravi and Larochelle [24], is to choose Nc classes and NS support points per class in order to match the expected situation at test-time.")

Snell does not teach:

estimating a probability density function (PDF) of a distance distribution of samples from R, which have a same label k_x, to a centroid in the feature domain, respectively, where 1 ≤ x ≤ l; and from each group G, drawing β samples from S as per the PDF estimated from R.

However, Kamalov does:

estimating a probability density function (PDF) of a distance distribution of samples from R, which have a same label k_x, to a centroid in the feature domain, respectively, where 1 ≤ x ≤ l; (Section 3, KDE sampling: "Nonparametric density estimation is an important tool in statistical data analysis. It is used to model the distribution of a variable based on a random sample. The resulting density function can be utilized to investigate various properties of the variable. Let {x1, x2, ..., xn} be an i.i.d. sample drawn from an unknown probability density function f. Then the kernel density estimate of f is given by")

from each group G, drawing β samples from S as per the PDF estimated from R; (Section 3, KDE Sampling: Equation 6 shows this: "Given a sample {x1, x2, ..., xn} of d-dimensional random sample vectors drawn from a distribution described by a density function f the kernel density estimate is defined to be [Equation 6]")

Snell and Kamalov are considered analogous art to the claimed invention because they are in the same field of endeavor, namely sampling data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the prototypical network of Snell with the kernel density estimation sampling of Kamalov. One would want to do this to generate new sample points (Kamalov, Introduction).

Regarding claim 2, Snell in view of Kamalov teaches claim 1 as outlined above.
Snell further teaches:

each task T corresponds to a set of n labeled points {p_1, p_2, ..., p_n} and each point p_{k_x} is associated with a label k_x ∈ {k_1, k_2, ..., k_l}, where 1 ≤ x ≤ l. (Section 2.2, Model: "Training episodes are formed by randomly selecting a subset of classes from the training set, then choosing a subset of examples within each class to act as the support set and a subset of the remainder to serve as query points.")

Regarding claim 3, Snell in view of Kamalov teaches claim 1 as outlined above. Snell further teaches:

a difficulty of task T is related to the distance distribution of points with same label k_x to a centroid of all samples with label k_x, ∀x ∈ [1, l]. (Section 1, Introduction: "In particular, we relate Prototypical Networks to clustering [4] in order to justify the use of class means as prototypes when distances are computed with a Bregman divergence, such as squared Euclidean distance. We find empirically that the choice of distance is vital, as Euclidean distance greatly outperforms the more commonly used cosine similarity. On several benchmark tasks, we achieve state-of-the-art performance. Prototypical Networks are simpler and more efficient than recent meta-learning algorithms, making them an appealing approach to few-shot and zero-shot learning.")

Claims 4-9 are rejected under 35 U.S.C. 103 as being unpatentable over Snell in view of Kamalov and Parnami (NPL: "Few-Shot Keyword Spotting with Prototypical Networks").

Regarding claim 4, Snell in view of Kamalov teaches claim 1 as outlined above.
Neither Snell nor Kamalov teaches:

the method is applied to binary classification tasks so as to allow classification of content of an input audio segment as a target spoken keyword (INV) or as a non-target keyword (OOV).

However, Parnami does:

the method is applied to binary classification tasks so as to allow classification of content of an input audio segment as a target spoken keyword (INV) or as a non-target keyword (OOV). (Introduction: "Due to the data hungry nature of DNNs, recently the field of Few-Shot Learning has emerged as a solution to address the issue. Specifically, Few-Shot Classification (FSC) [4] aims to learn a classifier that can recognize new classes (not seen during training) when given limited, labeled examples for each new class.")

Snell, Kamalov, and Parnami are considered analogous art to the claimed invention because they are in the same field of endeavor, namely sampling data and meta-learning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the prototypical network of Snell with the kernel density estimation sampling of Kamalov and with the keyword problem solving of Parnami. One would want to do this to solve keyword recognition problems (Parnami, Introduction).

Regarding claim 5, Snell in view of Kamalov and Parnami teaches claim 4 as outlined above. Snell further teaches:

representing each data point from a set of n labeled samples S and a reference dataset of labeled points R in a feature domain as: R = {r_1, r_2, ..., r_α} and S = {s_1, s_2, ..., s_n}; (Section 2, Prototypical Networks, subsection 2.1, Notation: "In few-shot classification we are given a small support set of N labeled examples S = {(x1, y1), ..., (xN, yN)} where each xi ∈ R^D is the D-dimensional feature vector of an example and yi ∈ {1, ..., K} is the corresponding label.")

estimating a probability density function (PDF) of a distance distribution of samples from R, which do not have a label k_x, to a centroid of all samples from R with label k_x in the feature domain, respectively, where 1 ≤ x ≤ l; (Snell only teaches the centroid: Section 2.2, Model, where Equation 1 teaches the centroid: "Each prototype is the mean vector of the embedded support points belonging to its class.")

grouping the samples in S based on a label k_x in x groups G, respectively, where G = {G_1, G_2, ..., G_x}; (Section 2.1, Notation: "Sk denotes the set of examples labeled with class k.")

computing, from all groups G, where G ≠ G_y, a distance between each of the samples and the centroid of G_y; (Section 2.2, Model: "Given a distance function ... Prototypical Networks produce a distribution over classes for a query point x based on a softmax over distances to the prototypes in the embedding space.")

grouping all l*β samples into a new task T. (Section 1, Introduction: "Notably, this model utilizes sampled mini-batches called episodes during training, where each episode is designed to mimic the few-shot task by subsampling classes as well as data points." And Section 2.6, Design choices: "A straightforward way to construct episodes, used in Vinyals et al. [32] and Ravi and Larochelle [24], is to choose Nc classes and NS support points per class in order to match the expected situation at test-time.")

Kamalov teaches:

estimating a probability density function (PDF) of a distance distribution of samples from R, which do not have a label k_x, to a centroid of all samples from R with label k_x in the feature domain, respectively, where 1 ≤ x ≤ l; (Section 3, KDE sampling: "Nonparametric density estimation is an important tool in statistical data analysis. It is used to model the distribution of a variable based on a random sample. The resulting density function can be utilized to investigate various properties of the variable. Let {x1, x2, ..., xn} be an i.i.d. sample drawn from an unknown probability density function f. Then the kernel density estimate of f is given by")

drawing β samples from all groups G, where G ≠ G_y, as per the PDF estimated from R; (Section 3, KDE Sampling: Equation 6 shows this: "Given a sample {x1, x2, ..., xn} of d-dimensional random sample vectors drawn from a distribution described by a density function f the kernel density estimate is defined to be [Equation 6]")

Parnami teaches:

corresponding each task T to a keyword-spotting (KWS) problem, each task T having a positive class called target spoken keyword (INV) and a negative class called non-target keyword (OOV); (Section 4, Few-Shot Google Speech Command Dataset: "Grouping: To train our KWS system to detect if an input query is an unknown keyword (not present in S), we group our keywords into two categories: Core and Unknown. Keywords having more than 1000 speakers are considered as core words and the rest are put in the category of unknown words.")

Snell, Kamalov, and Parnami are considered analogous art to the claimed invention because they are in the same field of endeavor, namely sampling data and meta-learning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the prototypical network of Snell with the kernel density estimation sampling of Kamalov and with the keyword problem solving of Parnami. One would want to do this to solve keyword recognition problems (Parnami, Introduction).

Regarding claim 6, Snell in view of Kamalov and Parnami teaches claim 5 as outlined above. Parnami further teaches:

all INV from other tasks correspond to keywords that are different from the tasks in task T. (Section 4, Few-Shot Google Speech Command Dataset: "Balancing: Next, we balance the dataset so that all keywords in a group have the same number of samples. As a result, we have 30 core keywords each with 1062 samples and 5 unknown keywords each with 386 samples and where all samples for a particular keyword come from a different speaker.")

Regarding claim 7, Snell in view of Kamalov and Parnami teaches claim 5 as outlined above. Parnami further teaches:

a target spoken keyword (INV) represents keywords inside a vocabulary. (Section 4, Few-Shot Google Speech Command Dataset, and Table 1 show the target keyword and the non-target keyword.)

Regarding claim 8, Snell in view of Kamalov and Parnami teaches claim 5 as outlined above. Parnami further teaches:

a non-target keyword (OOV) represents keywords out of a vocabulary. (Section 4, Few-Shot Google Speech Command Dataset, and Table 1 show the target keyword and the non-target keyword.)

Regarding claim 9, Snell in view of Kamalov and Parnami teaches claim 4 as outlined above. Parnami further teaches:

the INV samples for each task are chosen based on available audio segments. (Section 4, Few-Shot Google Speech Command Dataset, and Table 1 show the dataset, which is comprised of audio segments.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL PATRICK GRUSZKA, whose telephone number is (571) 272-5259. The examiner can normally be reached M-F, 9:00 AM - 6:00 PM ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Li Zhen, can be reached at (571) 272-3768.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL GRUSZKA/
Examiner, Art Unit 2121

/Li B. Zhen/
Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Jul 24, 2023
Application Filed
Mar 03, 2026
Non-Final Rejection — §101, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
