Prosecution Insights
Last updated: April 19, 2026
Application No. 17/943,022

METHODS AND APPARATUS TO TRAIN AN ARTIFICIAL INTELLIGENCE-BASED MODEL

Non-Final OA (§101, §103)

Filed: Sep 12, 2022
Examiner: LEWIS, MATTHEW LEE
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: The Nielsen Company (US), LLC
OA Round: 1 (Non-Final)

Grant Probability: 0% (At Risk)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 3 resolved; -55.0% vs TC avg; grants only 0% of cases)
Interview Lift: +0.0% (minimal lift, with vs. without interview; based on resolved cases with interview)
Avg Prosecution (typical timeline): 3y 3m
Total Applications (career history): 33 across all art units (30 currently pending)

Statute-Specific Performance

§101: 33.9% (-6.1% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 3 resolved cases
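As a quick arithmetic check of the statute-specific figures (a sketch only; the dashboard's estimation methodology is not disclosed), the implied Tech Center average can be recovered from each statute's rate and its "vs TC avg" delta:

```python
# Sketch: recover the implied Tech Center average for each statute.
# Rates and deltas are taken from the Statute-Specific Performance figures;
# the TC average is the examiner's rate minus its (negative) delta.
examiner_rates = {"101": 33.9, "103": 35.9, "102": 20.8, "112": 9.4}
deltas_vs_tc = {"101": -6.1, "103": -4.1, "102": -19.2, "112": -30.6}

tc_averages = {s: round(examiner_rates[s] - deltas_vs_tc[s], 1)
               for s in examiner_rates}
print(tc_averages)  # every statute implies the same TC average: 40.0
```

Notably, all four rows imply a Tech Center average of 40.0%, suggesting the dashboard compares each statute against a single aggregate baseline.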

Office Action

§101 §103
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1 & 15 are objected to because of the following informalities: “a characteristic a sinusoidal signal” should read as “a characteristic of a sinusoidal signal”. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, in Step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “An apparatus to train an artificial intelligence (AI)-based model”. An apparatus is one of the four statutory categories of invention.

In Step 2a Prong 1 of the 101-analysis set forth in the MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mathematical process but for recitation of generic computer components:

“generate a location value for a neuron in an AI-based model” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

“adjust a characteristic a sinusoidal signal based on a misclassification output by the AI-based model” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)
“determine that a trajectory of the sinusoidal signal is within a threshold distance of the location value” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

“adjust the location value in response to the trajectory being within the threshold distance” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

“adjust a weight that corresponds to the neuron based on the adjusted location value” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mathematical process but for the recitation of generic computer components, then they fall within the mathematical process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.

In Step 2a Prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:

“An apparatus to train an artificial intelligence (AI)-based model, the apparatus comprising: memory; computer readable instructions; and processor circuitry to execute the computer readable instructions to…” (Uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).)

Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.

In Step 2b of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, additional element (vi) recites use of a computer as a tool to perform the abstract idea, which is not indicative of significantly more. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 2, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1. Further, claim 2 recites “wherein the location value corresponds to an x-coordinate and a y-coordinate” (In Step 2A, Prong 2, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 3, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1. Further, claim 3 recites “wherein the characteristic includes at least one of a shift, a frequency, a period, a number of sinusoids, an offset, a number of points, a height, a width, or an amplitude” (In Step 2A, Prong 2, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 4, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1. Further, claim 4 recites the following additional abstract ideas:

“compare the output of the AI-based model to an output of the training data to generate an error” (This limitation recites a mental process. A person can mentally evaluate the output of an AI-based model in comparison with training data and make a judgment to generate an indication of error (MPEP 2106).)

“adjust the characteristic of the sinusoidal signal based on the error” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

Further, claim 4 recites “input training data into the AI-based model to generate an output” (In Step 2A, Prong 2, this recites mere instructions to apply the judicial exception (MPEP 2106.05(f)). In Step 2B, mere instructions to apply the judicial exception are not indicative of significantly more.)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 5, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1. Further, claim 5 recites the following additional abstract idea: “wherein the processor circuitry is to adjust the location value based on the trajectory of the sinusoidal signal” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 6, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1. Further, claim 6 recites the following additional abstract idea: “wherein the weight corresponds to a distance between the neuron and a neuron of a subsequent layer of the AI-based model” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 7, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1. Further, claim 7 recites “wherein the processor circuitry is to deploy the AI-based model” (In Step 2A, Prong 2, this recites mere instructions to apply the judicial exception (MPEP 2106.05(f)). In Step 2B, mere instructions to apply the judicial exception are not indicative of significantly more.)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 8, in Step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “A non-transitory computer readable medium comprising instructions”. A non-transitory medium is within one of the four statutory categories of invention.
In Step 2a Prong 1 of the 101-analysis set forth in the MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mathematical process but for recitation of generic computer components:

“determine a location of a neuron in an artificial intelligence (AI)-based model” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

“adjust a sinusoidal signal based on an output of the AI-based model” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

“detect that a trajectory of the sinusoidal signal is within a threshold distance of the location of the neuron” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

“adjust the location of the neuron in response to the trajectory being within the threshold distance” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

“tune a weight corresponding to the neuron based on the adjusted location of the neuron” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mathematical process but for the recitation of generic computer components, then they fall within the mathematical process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
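The apparatus, medium, and system claims recite the same five-step training flow. Purely as a reading aid, the following is a minimal, hypothetical Python sketch of that recited sequence; the function names, update rules, and constants are illustrative assumptions, not the applicant's disclosed implementation:

```python
import math

# Hypothetical sketch of the steps recited in claims 1, 8, and 15.
# Every name and constant here is an illustrative assumption.

def sinusoid(t, amplitude=1.0, frequency=1.0, phase=0.0):
    # "a trajectory of the sinusoidal signal", sampled at time t
    return amplitude * math.sin(frequency * t + phase)

def training_step(neuron_loc, weight, misclassified, t,
                  amplitude=1.0, threshold=0.5):
    # "adjust a characteristic [of] a sinusoidal signal based on a
    # misclassification output by the AI-based model"
    if misclassified:
        amplitude *= 1.1
    trajectory = sinusoid(t, amplitude)
    # "determine that a trajectory ... is within a threshold distance
    # of the location value"
    if abs(trajectory - neuron_loc) < threshold:
        # "adjust the location value in response to the trajectory
        # being within the threshold distance"
        neuron_loc += 0.1 * (trajectory - neuron_loc)
        # "adjust a weight that corresponds to the neuron based on
        # the adjusted location value"
        weight = 0.9 * weight + 0.1 * neuron_loc
    return neuron_loc, weight, amplitude
```

For example, `training_step(1.0, 0.5, True, math.pi / 2)` enlarges the amplitude (misclassification occurred) and nudges the neuron location toward the trajectory, since the signal passes within the threshold distance.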
In Step 2a Prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:

“A non-transitory computer readable medium comprising instructions which, when executed, cause one or more processors to at least…” (Uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).)

Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.

In Step 2b of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, additional element (vi) recites use of a computer as a tool to perform the abstract idea, which is not indicative of significantly more. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 9, it is dependent upon claim 8, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 8. Further, claim 9 recites “wherein the location corresponds to a location on a coordinate plane” (In Step 2A, Prong 2, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claims 10-13, they are dependent upon claim 8, and thereby incorporate the limitations of, and corresponding analysis applied to, claim 8. Further, claims 10-13 comprise similar additional limitations as claims 3-6, respectively, and are rejected under the same rationale.

Regarding claim 14, it is dependent upon claim 8, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 8. Further, claim 14 recites “wherein the instructions cause the one or more processors to store the AI-based model” (In Step 2A, Prong 2, this recites insignificant extra-solution activity (mere data storage) to the judicial exception (MPEP 2106.05(g)). In Step 2B, the courts have found steps that store and retrieve information in memory to be a well-understood, routine, and conventional activity, which is not indicative of significantly more (Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015)).)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 15, in Step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “An apparatus to train an artificial intelligence (AI)-based model”. An apparatus is one of the four statutory categories of invention.
In Step 2a Prong 1 of the 101-analysis set forth in the MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mathematical process but for recitation of generic computer components:

“location determination circuitry to determine a location value for a neuron in an AI-based model” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

“sinusoid generation circuitry to change a characteristic a sinusoidal signal when the AI-based model misclassifies data” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

“the location determination circuitry to, determine that a trajectory of the sinusoidal signal is within a threshold distance of the location value” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

“change the location value in response to the trajectory being within the threshold distance” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

“weight determination circuitry to change a weight that corresponds to the neuron based on the location value” (In view of the specification at [0037-0038], this is directed to a mathematical process, which recites an abstract idea (MPEP 2106).)

If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mathematical process but for the recitation of generic computer components, then they fall within the mathematical process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In Step 2a Prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application: “An apparatus to train an artificial intelligence (AI)-based model, the apparatus comprising: interface circuitry to obtain training data; and processor circuitry including one or more of: at least one of a central processor unit, a graphics processor unit, or a digital signal processor, the at least one of the central processor unit, the graphics processor unit, or the digital signal processor having control circuitry to control data movement within the processor circuitry, arithmetic and logic circuitry to perform one or more first operations corresponding to instructions, and one or more registers to store a result of the one or more first operations, the instructions in the apparatus; a Field Programmable Gate Array (FPGA), the FPGA including logic gate circuitry, a plurality of configurable interconnections, and storage circuitry, the logic gate circuitry and the plurality of the configurable interconnections to perform one or more second operations, the storage circuitry to store a result of the one or more second operations; or Application Specific Integrated Circuitry (ASIC) including logic gate circuitry to perform one or more third operations; the processor circuitry to perform at least one of the first operations, the second operations, or the third operations to instantiate…” (Uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).) Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea. 
In Step 2b of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, additional element (vi) recites use of a computer as a tool to perform the abstract idea, which is not indicative of significantly more. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 16, it is dependent upon claim 15, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 15. Further, claim 16 recites “wherein the location value corresponds to a location on a grid” (In Step 2A, Prong 2, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claims 17-20, they are dependent upon claim 15, and thereby incorporate the limitations of, and corresponding analysis applied to, claim 15. Further, claims 17-20 comprise similar additional limitations as claims 3-6, respectively, and are rejected under the same rationale.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4-8, 11-15, & 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Taghian, S. et al. “Binary Sine Cosine Algorithms for Feature Selection from Medical Data.” Available at https://arxiv.org/abs/1911.07805 on 15 November 2019 (hereafter, TAGHIAN), and further in view of Martinetz, T. et al. “A ‘Neural-Gas’ Network Learns Topologies.” Available at https://www.ks.uiuc.edu/Publications/Papers/PDF/MART91B/ on 22 September 2000 (hereafter, MARTINETZ), and further in view of Singh, H.
“Everything you Need to Know About Hardware Requirements for Machine Learning.” Available at https://www.einfochips.com/blog/everything-you-need-to-know-about-hardware-requirements-for-machine-learning/ on 24 February 2019 (hereafter, SINGH). Regarding claim 1, TAGHIAN teaches “generate a location value for a neuron in an AI-based model”: ([Abstract] “A well-constructed classification model (an AI-based model) highly depends on input feature subsets from a dataset, which may contain redundant, irrelevant, or noisy features. This challenge can be worse while dealing with medical datasets. The main aim of feature selection as a pre-processing task is to eliminate these features and select the most effective ones. In the literature, metaheuristic algorithms show a successful performance to find optimal feature subsets. In this paper, two binary metaheuristic algorithms named S-shaped binary Sine Cosine Algorithm (SBSCA) and V-shaped binary Sine Cosine Algorithm (VBSCA) are proposed for feature selection from the medical data. In these algorithms, the search space remains continuous, while a binary position vector is generated by two transfer functions S-shaped and V-shaped for each solution (Here, we see a binary position vector/location value generated for the AI model’s neurons. The binary position vector tracks the position/location of the neuron for the AI-based model so that it may be adjusted, which is equivalent to the claimed limitation). The proposed algorithms are compared with four latest binary optimization algorithms over five medical datasets from the UCI repository. 
The experimental results confirm that using both bSCA variants enhance the accuracy of classification on these medical datasets compared to four other algorithms.”) Further, TAGHIAN teaches “adjust a characteristic a sinusoidal signal based on a misclassification output by the AI-based model”: ([Abstract] “A well-constructed classification model (an AI-based model) highly depends on input feature subsets from a dataset, which may contain redundant, irrelevant, or noisy features. This challenge can be worse while dealing with medical datasets. The main aim of feature selection as a pre-processing task is to eliminate these features and select the most effective ones. In the literature, metaheuristic algorithms show a successful performance to find optimal feature subsets. In this paper, two binary metaheuristic algorithms named S-shaped binary Sine Cosine Algorithm (SBSCA) and V-shaped binary Sine Cosine Algorithm (VBSCA) (the use of sine functions in a neural network or AI-based model indicates “sinusoidal signals”) are proposed for feature selection from the medical data. In these algorithms, the search space remains continuous, while a binary position vector is generated by two transfer functions S-shaped and V-shaped for each solution. The proposed algorithms are compared with four latest binary optimization algorithms over five medical datasets from the UCI repository. The experimental results confirm that using both bSCA variants enhance the accuracy of classification on these medical datasets compared to four other algorithms.”) Here, we see sine functions in use in neural networks, meaning that sinusoidal signals are present, in addition to the fact that the neural network is used as a classification model, which is an AI-based model. And further: ([Pages 5-6, Section 5. 
Binary Cosine Algorithm for the Feature Selection Problem] “Feature selection is a process of selecting relevant features of a dataset in order to improve the learning performance, decreasing the computational complexity, and building a better classification model (avoid misclassification). Based on the nature of the feature selection problem, a binary algorithm is usually applied to find an optimum feature subset. Every individual in the binary algorithms is represented as a binary vector with N entries, where N is the total number of features in a dataset. Each vector has the value 0 or 1, where zero indicates that the feature is not selected whereas one represents that the feature is selected. For this reason, in this work, two proposed binary versions of the SCA are applied in the feature selection problem. Feature selection can be considered as a multi-objective problem in which two contrary objectives must be satisfied. These two objectives are the maximum accuracy, and the other is the minimum number of selected features. The fitness function that is used to evaluate each individual is shown in Eq. 7: Fitness = α·ER(D) + β·(|R|/|C|), where ER(D) is the classification error, |R| is the number of selected features, |C| is the total number of features in the dataset, α and β are two parameters related to the importance of accuracy and number of selected features, α ∈ [0, 1] and β = 1 − α [29].”)

This section shows that the number of selected features in comparison to the total number of features in the dataset is continuously modified until “accuracy is optimized”/“classification error is reduced”, which means as classification error is high (a misclassification), the feature selection (a characteristic of the sinusoid signal) is modified, in an effort to achieve optimal accuracy.
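The fitness function the examiner relies on here (Eq. 7 of TAGHIAN) is, per the surrounding definitions, a weighted sum of the classification error ER(D) and the selected-feature ratio |R|/|C|, with β = 1 − α. A minimal sketch of that trade-off (the value of α below is illustrative, not taken from the reference):

```python
# Sketch of the Eq. 7 fitness described in the quoted passage:
# a weighted sum of classification error and the selected-feature
# ratio, with beta = 1 - alpha. Alpha here is illustrative.

def fitness(error_rate, num_selected, num_total, alpha=0.99):
    beta = 1.0 - alpha
    return alpha * error_rate + beta * (num_selected / num_total)

# Lower fitness is better: at equal error, fewer selected features win.
f_small = fitness(error_rate=0.10, num_selected=5, num_total=30)
f_large = fitness(error_rate=0.10, num_selected=25, num_total=30)
```

This makes the examiner's point concrete: high classification error (a misclassification) raises the fitness value, driving the algorithm to modify the feature subset.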
TAGHIAN fails to explicitly teach “An apparatus to train an artificial intelligence (AI)-based model, the apparatus comprising: memory; computer readable instructions; and processor circuitry to execute the computer readable instructions” and “determine that a trajectory of the sinusoidal signal is within a threshold distance of the location value; adjust the location value in response to the trajectory being within the threshold distance; and adjust a weight that corresponds to the neuron based on the adjusted location value”.

However, analogous art, MARTINETZ does teach “determine that a trajectory of the… signal is within a threshold distance of the location value; adjust the location value in response to the trajectory being within the threshold distance; and adjust a weight that corresponds to the neuron based on the adjusted location value”:

([Pages 398-399, Section 2. The “Neural-Gas Network”] “In the approach we present here the synaptic weights wi are adapted independently of any topological arrangement of the neural units within the neural net. Instead, the adaptation steps are affected by the topological arrangement of the receptive fields within the input space. Since the synaptic weight changes Δwi are not determined by the arrangement of the neural units within a topologically prestructured lattice, but by the relative distances between the neural units within the input space, we chose the name "neural-gas" network. Information about the arrangement of the receptive fields within the input space is implicitly given by the set of distortions Dv = {||v - wi||, i = 1, ..., N} associated with each v. Each time an input signal v is presented, the ordering of the elements of the set Dv determines the adjustment of the synaptic weights wi… The resulting connectivity matrix Cij at the end of the learning procedure represents the similarity, i.e., the neighborhood relationships, among the input data...
… To capture the neighborhood relationships between the reference vectors wi each time an input stimulus is presented we establish a connection between the neural unit i0, which had its wi closest to v, and the neural unit i1, which had its wi second closest to the input signal. The creation of this connection is described by setting the matrix element Ci0i1 from zero to one…. … In Fig. 1 we show schematically which of the neural units are connected by the introduced adaptation rule. The neural unit denoted by i is the "winner" for input signals presented within the shaded area, the receptive field or Voronoi polygon of neuron i. The numbers 1, ..., 6 denote the neural units which are second closest to input signals appearing within the correspondingly numbered subregions of the grey shaped area. Only to the neural units 1, ..., 6 the receptive fields of which share common borders with its own receptive field neural unit i develops connections.

[Figure 1 of MARTINETZ was placed here by the examiner for convenience of reference, since it was cited above.]

In the citations, we see it being determined when an input vector is within a threshold or ranked distance of each neuron’s location to decide which neuron to adjust, similar to the claimed trajectory, which acts as a moving input probe. The prior art here determines if a neuron lies within a predetermined distance of an input point, which provides a functionally identical distance-based activation. The claimed adjustment of neuron location when the sinusoid passes nearby is an explicit variant of the classic rule “move the neuron’s prototype toward the input if within neighborhood radius” that we see here, as we see it directly moving neuron locations toward or away from the input when the neuron is within a defined neighborhood/threshold. We also see it explicitly adjusting the corresponding weights based on the adjusted location values.
When combined with TAGHIAN, it would be naturally obvious to apply the same idea to sine-based activations.”) It would be obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of TAGHIAN with the teachings of MARTINETZ because both references teach optimal methods for neural networks/AI models. One of ordinary skill in the art would be motivated to do so because, as MARTINETZ points out at the end of the introduction on page 398, the teachings of MARTINETZ provide “a more flexible network capable of (i) quantizing topologically heterogeneously structured manifolds and (ii) learning the similarity relationships among the input signals without the necessity of prespecifying a network topology.” TAGHIAN in view of MARTINETZ still fails to explicitly teach “An apparatus to train an artificial intelligence (AI)-based model, the apparatus comprising: memory; computer readable instructions; and processor circuitry to execute the computer readable instructions”. However, analogous art, SINGH, does teach this: ([Paragraph 5-6] “There are four steps for preparing a machine learning model: (1) Preprocessing input data; (2) Training the deep learning model; (3) Storing the trained deep learning model; (4) Deployment of the model. Among all these, training the machine learning model is the most computationally intensive task. Now if we talk about training the model, which generally requires a lot of computational power, the process could be frustrating if done without the right hardware.”) And further: ([Hardware requirements for machine learning] “The first thing you should determine is what kind of resource does your task requires. Let’s have a look how different tasks will have different hardware requirements… ... 
A laptop with a dedicated graphics card of high end should do the work (An apparatus containing a memory, computer-readable instructions, and processor circuitry to execute the computer readable instructions). There are a few high end (and expectedly heavy) laptops like Nvidia GTX 1080 (8 GB VRAM) (explicit mention of memory), which can train an average of ~14k examples/second. In addition, you can build your own PC with a reasonable CPU and a powerful GPU (explicit mention of processor circuitry) …”) It would be obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of TAGHIAN in view of MARTINETZ with the teachings of SINGH because TAGHIAN in view of MARTINETZ teaches of methods for training artificial-intelligence based models while SINGH teaches hardware requirements for training and using artificial intelligence models. One of ordinary skill in the art would be motivated to do so because, without meeting the hardware requirements for any technology, the technology will not function. Further, as pointed out by SINGH, and cited earlier in paragraphs 5-6, “the process could be frustrating if done without the right hardware.” Regarding claim 4, TAGHIAN in view of MARTINETZ & SINGH teaches the limitations of claim 1. Further, TAGHIAN teaches “compare the output of the AI-based model to an output of the training data to generate an error; and adjust the characteristic of the sinusoidal signal based on the error”: ([Abstract] “A well-constructed classification model (an AI-based model) highly depends on input feature subsets from a dataset, which may contain redundant, irrelevant, or noisy features. This challenge can be worse while dealing with medical datasets. The main aim of feature selection as a pre-processing task is to eliminate these features and select the most effective ones. 
In the literature, metaheuristic algorithms show a successful performance to find optimal feature subsets. In this paper, two binary metaheuristic algorithms named S-shaped binary Sine Cosine Algorithm (SBSCA) and V-shaped binary Sine Cosine Algorithm (VBSCA) (the use of sine functions in a neural network or AI-based model indicates “sinusoidal signals”) are proposed for feature selection from the medical data. In these algorithms, the search space remains continuous, while a binary position vector is generated by two transfer functions S-shaped and V-shaped for each solution. The proposed algorithms are compared with four latest binary optimization algorithms over five medical datasets from the UCI repository. The experimental results confirm that using both bSCA variants enhance the accuracy of classification on these medical datasets compared to four other algorithms.”) Here, we see sine functions in use in neural networks, meaning that sinusoidal signals are present, in addition to the fact that the neural network is used as a classification model, which is an AI-based model. And further: ([Pages 5-6, Section 5. Binary Cosine Algorithm for the Feature Selection Problem] “Feature selection is a process of selecting relevant features of a dataset in order to improve the learning performance, decreasing the computational complexity, and building a better classification model (avoid misclassification). Based on the nature of the feature selection problem, a binary algorithm is usually applied to find an optimum feature subset. Every individual in the binary algorithms is represented as a binary vector with N entries, where N is the total number of features in a dataset. Each vector has the value 0 or 1, where zero indicates that the feature is not selected whereas one represents that the feature is selected. For this reason, in this work, two proposed binary versions of the SCA are applied in the feature selection problem. 
Feature selection can be considered as a multi-objective problem in which two contrary objectives must be satisfied. These two objectives are the maximum accuracy, and the other is the minimum number of selected features. The fitness function that is used to evaluate each individual is shown in Eq.7: Fitness = α · ER(D) + β · (|R| / |C|), where ER(D) is the classification error, |R| is the number of selected features, |C| is the total number of features in the dataset, α and β are two parameters related to the importance of accuracy and number of selected features, α ∈ [0, 1] and β = 1 − α [29].”) This section shows that the number of selected features in comparison to the total number of features in the dataset is continuously modified until “accuracy is optimized”/“classification error is reduced”, which means that when classification error is high (a misclassification), the feature selection (a characteristic of the sinusoidal signal) is modified, in an effort to achieve optimal accuracy. Further, SINGH teaches “wherein the processor circuitry is to: input training data into the AI-based model to generate an output”: ([Paragraph 5-6] “There are four steps for preparing a machine learning model (AI-based model): (1) Preprocessing input data (input training data into the AI-based model); (2) Training the deep learning model; (3) Storing the trained deep learning model; (4) Deployment of the model (deploying the model to generate an output). Among all these, training the machine learning model is the most computationally intensive task. Now if we talk about training the model, which generally requires a lot of computational power, the process could be frustrating if done without the right hardware.”) Regarding claim 5, TAGHIAN in view of MARTINETZ & SINGH teaches the limitations of claim 1. 
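The Eq. 7 fitness function cited above for claim 4 can be sketched in a few lines. This is an illustration of the standard SCA feature-selection formulation, not TAGHIAN's code; the function and variable names are ours, and the α = 0.99 default is a commonly used weighting, assumed here for concreteness.

```python
def fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Eq. 7 sketch: alpha * ER(D) + beta * |R|/|C|, with beta = 1 - alpha.
    Smaller values are better: low classification error and few selected
    features are jointly rewarded. alpha is a weighting in [0, 1]."""
    beta = 1.0 - alpha
    return alpha * error_rate + beta * (n_selected / n_total)

# Lower error with fewer selected features gives a smaller (better) fitness:
good = fitness(error_rate=0.05, n_selected=10, n_total=100)
bad = fitness(error_rate=0.20, n_selected=60, n_total=100)
```

This makes explicit why optimizing the fitness drives the modification the examiner relies on: as ER(D) grows, the fitness worsens, so the feature subset is adjusted to reduce it.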
Further, MARTINETZ teaches “wherein the processor circuitry is to adjust the location value based on the trajectory of the sinusoidal signal”: ([Pages 398-399, Section 2. The “Neural-Gas Network”] “In the approach we present here the synaptic weights wi are adapted independently of any topological arrangement of the neural units within the neural net. Instead, the adaptation steps are affected by the topological arrangement of the receptive fields within the input space. Since the synaptic weight changes Δwi are not determined by the arrangement of the neural units within a topologically prestructured lattice, but by the relative distances between the neural units within the input space, we chose the name "neural-gas" network. Information about the arrangement of the receptive fields within the input space is implicitly given by the set of distortions Dv = {||v − wi||, i = 1, ..., N} associated with each v. Each time an input signal v is presented, the ordering of the elements of the set Dv determines the adjustment of the synaptic weights wi… The resulting connectivity matrix Cij at the end of the learning procedure represents the similarity, i.e., the neighborhood relationships, among the input data... … To capture the neighborhood relationships between the reference vectors wi each time an input stimulus is presented we establish a connection between the neural unit i0, which had its wi closest to v, and the neural unit i1, which had its wi second closest to the input signal. The creation of this connection is described by setting the matrix element Ci0i1 from zero to one…. … In Fig.1 we show schematically which of the neural units are connected by the introduced adaptation rule. The neural unit denoted by i is the "winner" for input signals presented within the shaded area, the receptive field or Voronoi polygon of neuron i. The numbers 1, ... 
, 6 denote the neural units which are second closest to input signals appearing within the correspondingly numbered subregions of the grey shaped area. Only to the neural units 1, ... , 6 the receptive fields of which share common borders with its own receptive field neural unit i develops connections. [Figure 1 of MARTINETZ] Figure 1 was placed here by the examiner for convenience of reference, since it was cited above. In the citations, we see it being determined when an input vector is within a threshold or ranked distance of each neuron’s location to decide which neuron to adjust, similar to the claimed trajectory, which acts as a moving input probe. The prior art here determines if a neuron lies within a predetermined distance of an input point, which provides a functionally identical distance-based activation. The claimed adjustment of neuron location when the sinusoid passes nearby is an explicit variant of the classic rule “move the neuron’s prototype toward the input if within neighborhood radius” that we see here, as we see it directly moving neuron locations toward or away from the input when the neuron is within a defined neighborhood/threshold. We also see it explicitly adjusting the corresponding weights based on the adjusted location values. When combined with TAGHIAN, it would be naturally obvious to apply the same idea to sine-based activations.”) Regarding claim 6, TAGHIAN in view of MARTINETZ & SINGH teaches the limitations of claim 1. Further, MARTINETZ teaches “wherein the weight corresponds to a distance between the neuron and a neuron of a subsequent layer of the AI-based model”: ([Pages 398-399, Section 2. The “Neural-Gas Network”] “In the approach we present here the synaptic weights wi are adapted independently of any topological arrangement of the neural units within the neural net. Instead, the adaptation steps are affected by the topological arrangement of the receptive fields within the input space. 
Since the synaptic weight changes Δwi are not determined by the arrangement of the neural units within a topologically prestructured lattice, but by the relative distances between the neural units within the input space…”) This specifically describes the weights corresponding to a distance between a neuron and a neuron of a subsequent layer. Regarding claim 7, TAGHIAN in view of MARTINETZ & SINGH teaches the limitations of claim 1. Further, SINGH teaches “wherein the processor circuitry is to deploy the AI-based model”: ([Paragraph 5-6] “There are four steps for preparing a machine learning model (AI-based model): (1) Preprocessing input data (input training data into the AI-based model); (2) Training the deep learning model; (3) Storing the trained deep learning model; (4) Deployment of the model (deploying the model to generate an output). Among all these, training the machine learning model is the most computationally intensive task. Now if we talk about training the model, which generally requires a lot of computational power, the process could be frustrating if done without the right hardware.”) Regarding claim 8, TAGHIAN teaches “determine a location of a neuron in an artificial intelligence (AI)-based model”: ([Abstract] “A well-constructed classification model (an AI-based model) highly depends on input feature subsets from a dataset, which may contain redundant, irrelevant, or noisy features. This challenge can be worse while dealing with medical datasets. The main aim of feature selection as a pre-processing task is to eliminate these features and select the most effective ones. In the literature, metaheuristic algorithms show a successful performance to find optimal feature subsets. In this paper, two binary metaheuristic algorithms named S-shaped binary Sine Cosine Algorithm (SBSCA) and V-shaped binary Sine Cosine Algorithm (VBSCA) are proposed for feature selection from the medical data. 
In these algorithms, the search space remains continuous, while a binary position vector is generated by two transfer functions S-shaped and V-shaped for each solution (Here, we see a binary position vector/location value generated for the AI model’s neurons. The binary position vector tracks the position/location of the neuron for the AI-based model so that it may be adjusted, which is equivalent to the claimed limitation). The proposed algorithms are compared with four latest binary optimization algorithms over five medical datasets from the UCI repository. The experimental results confirm that using both bSCA variants enhance the accuracy of classification on these medical datasets compared to four other algorithms.”) Further, TAGHIAN teaches “adjust a sinusoidal signal based on an output of the AI-based model”: ([Abstract] “A well-constructed classification model (an AI-based model) highly depends on input feature subsets from a dataset, which may contain redundant, irrelevant, or noisy features. This challenge can be worse while dealing with medical datasets. The main aim of feature selection as a pre-processing task is to eliminate these features and select the most effective ones. In the literature, metaheuristic algorithms show a successful performance to find optimal feature subsets. In this paper, two binary metaheuristic algorithms named S-shaped binary Sine Cosine Algorithm (SBSCA) and V-shaped binary Sine Cosine Algorithm (VBSCA) (the use of sine functions in a neural network or AI-based model indicates “sinusoidal signals”) are proposed for feature selection from the medical data. In these algorithms, the search space remains continuous, while a binary position vector is generated by two transfer functions S-shaped and V-shaped for each solution. The proposed algorithms are compared with four latest binary optimization algorithms over five medical datasets from the UCI repository. 
The experimental results confirm that using both bSCA variants enhance the accuracy of classification on these medical datasets compared to four other algorithms.”) Here, we see sine functions in use in neural networks, meaning that sinusoidal signals are present, in addition to the fact that the neural network is used as a classification model, which is an AI-based model. And further: ([Pages 5-6, Section 5. Binary Cosine Algorithm for the Feature Selection Problem] “Feature selection is a process of selecting relevant features of a dataset in order to improve the learning performance, decreasing the computational complexity, and building a better classification model (avoid misclassification). Based on the nature of the feature selection problem, a binary algorithm is usually applied to find an optimum feature subset. Every individual in the binary algorithms is represented as a binary vector with N entries, where N is the total number of features in a dataset. Each vector has the value 0 or 1, where zero indicates that the feature is not selected whereas one represents that the feature is selected. For this reason, in this work, two proposed binary versions of the SCA are applied in the feature selection problem. Feature selection can be considered as a multi-objective problem in which two contrary objectives must be satisfied. These two objectives are the maximum accuracy, and the other is the minimum number of selected features. The fitness function that is used to evaluate each individual is shown in Eq.7. 
Fitness = α · ER(D) + β · (|R| / |C|), where ER(D) is the classification error, |R| is the number of selected features, |C| is the total number of features in the dataset, α and β are two parameters related to the importance of accuracy and number of selected features, α ∈ [0, 1] and β = 1 − α [29].”) This section shows that the number of selected features in comparison to the total number of features in the dataset is continuously modified until “accuracy is optimized”/“classification error is reduced”, which means that when classification error is high (a misclassification), the feature selection (a characteristic of the sinusoidal signal) is modified, in an effort to achieve optimal accuracy. TAGHIAN fails to explicitly teach “A non-transitory computer readable medium comprising instructions which, when executed, cause one or more processors to…” and “detect that a trajectory of the sinusoidal signal is within a threshold distance of the location of the neuron; adjust the location of the neuron in response to the trajectory being within the threshold distance; and tune a weight corresponding to the neuron based on the adjusted location of the neuron”. However, analogous art, MARTINETZ, does teach “detect that a trajectory of the… signal is within a threshold distance of the location of the neuron; adjust the location of the neuron in response to the trajectory being within the threshold distance; and tune a weight corresponding to the neuron based on the adjusted location of the neuron”: ([Pages 398-399, Section 2. T

Prosecution Timeline

Sep 12, 2022
Application Filed
Nov 01, 2025
Non-Final Rejection — §101, §103 (current)

