Prosecution Insights
Last updated: April 19, 2026
Application No. 18/012,408

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, NON-TRANSITORY COMPUTER READABLE MEDIUM

Final Rejection (§101, §103)
Filed: Dec 22, 2022
Examiner: ABOU EL SEOUD, MOHAMED
Art Unit: 2148
Tech Center: 2100 (Computer Architecture & Software)
Assignee: NEC Corporation
OA Round: 2 (Final)
Grant Probability: 38% (At Risk)
OA Rounds: 3-4
To Grant: 4y 2m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 38% (80 granted / 208 resolved; -16.5% vs TC avg)
Interview Lift: +38.7% for resolved cases with interview
Avg Prosecution: 4y 2m (typical timeline); 46 currently pending
Total Applications: 254 across all art units (career history)

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 48.2% (+8.2% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)
Based on career data from 208 resolved cases; deltas compare against a Tech Center average estimate.
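The headline figures above follow from simple arithmetic on the raw counts. A quick sketch, assuming the 38% career allow rate is simply granted divided by resolved and that the "-16.5% vs TC avg" delta is a plain difference from the Tech Center average:

```python
# Reproduce the headline examiner statistics from the raw counts
# shown above (80 granted out of 208 resolved career cases).
granted, resolved = 80, 208

career_allow_rate = granted / resolved  # fraction of resolved cases granted
print(f"Career allow rate: {career_allow_rate:.1%}")  # rounds to the 38% shown

# The -16.5% delta implies a Tech Center average of roughly:
tc_average = career_allow_rate + 0.165
print(f"Implied TC average: {tc_average:.1%}")
```

The same arithmetic applied to the statute-specific rates below reproduces each delta against the Tech Center estimate.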

Office Action

§101, §103
DETAILED ACTION

This office action is responsive to the response filed 1/2/2026. The application contains claims 1, 6-8, 10-12, all examined and rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. Claim limitations in claims 1-10 have been interpreted under 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA), sixth paragraph, because they use the non-structural term “module” coupled with functional language without reciting sufficient structure to achieve the function. Furthermore, the non-structural term is not preceded by a structural modifier. Claim 10 recites the limitation “hard category estimator” coupled with functional language without reciting sufficient structure to achieve the function. Since these claim limitations invoke 35 U.S.C. 112(f) or 35 U.S.C. 
112 (pre-AIA), sixth paragraph, claim 10 is interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof. A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA), sixth paragraph limitation: Paragraph [0143] states, “information processing apparatus (e.g., information processing apparatus 1, module 100, 200, 300, 400, 500, or 600) includes a network interface 1201, a processor 1202 and a memory 1203”. Based on the guidelines announced in Federal Register Vol. 76, No. 27, this has been interpreted as encompassing a hardware, or hardware in combination with software, implementation of the module, but not a pure software implementation. If applicant wishes to provide further explanation or dispute the examiner’s interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action. The claimed modules also trigger interpretation of the claim language under 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA), sixth paragraph, since they are considered a placeholder for a corresponding structure in the specification. If applicant does not wish to have the claim limitation treated under 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA), sixth paragraph, applicant may amend the claim so that it will clearly not invoke 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA), sixth paragraph, or present a sufficient showing that the claim recites sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or 35 U.S.C. 112 (pre-AIA), sixth paragraph. For more information, see MPEP § 2173 et seq. 
and Supplementary Examination Guidelines for Determining Compliance with 35 U.S.C. § 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1, 6-8, 10-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 1 is rejected under 35 USC 101 because the claimed inventions are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. While independent claims 1, 11 and 12 are each directed to a statutory category, each recites a series of steps pertaining to analyzing received data to identify features that are used to predict machine failure, which appears to be directed to an abstract idea (mental process, mathematical concept). Claims 1, 6-8, 10-12 are rejected under 35 U.S.C. § 101 because the instant application is directed to non-patentable subject matter. Specifically, the claims are directed toward at least one judicial exception without reciting additional elements that amount to significantly more than the judicial exception. The rationale for this determination is in accordance with the guidelines of the USPTO, applies to all statutory categories, and is explained in detail below. When considering subject matter eligibility under 35 U.S.C. 101, (1) it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. 
If the claim does fall within one of the statutory categories, (2a) it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, and abstract idea), and if so (2b), it must additionally be determined whether the claim is a patent-eligible application of the exception. If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself. Examples of abstract ideas include certain methods of organizing human activities; mental processes; and mathematical concepts (2019 PEG).

STEP 1. Per Step 1, the claims are determined to include a machine, a process, and a manufacture, as in independent Claims 1, 11, and 12 and the claims dependent therefrom. Therefore, the claims are directed to a statutory eligibility category. At Step 2A, prong 1, the invention is directed to identifying features within received data that could be an indication of the probability of occurrence of a machine failure based on analyzed historic data, which is akin to a Mental Process (see Alice). As such, the claims include an abstract idea. 
When considering the limitations individually and as a whole, the limitations directed to the abstract idea are: “estimate a soft category using predetermined parameters of a position, size and margin width of a rectangular pattern for classifying the Data Input as the positive data and the negative data”, “compare the estimated soft category label with the true Data labels for the Data Input and output a feedback on the predetermined parameters”, “modify the predetermined parameters to reduce a total loss to learn an optimal margined rectangular pattern for classifying the positive data and the negative data”, “penalize the rectangular pattern if the rectangular pattern covers a negative point” (Mental process, observation, evaluation and judgment), “using any of an off-the-shelf gradient and a line search based algorithm to determine the predetermined parameter to ensure that the total loss is a minimum” (Mental process, observation, evaluation and judgment, Mathematical concept), “terminating a training process for modifying the predetermined parameters and saving the modified parameters in a storage if a predetermined condition is met” (Mental process, observation, evaluation and judgment). 
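To make the recited steps concrete, the limitations quoted above can be sketched as a toy training loop. This is an illustrative reconstruction, not the applicant's implementation: the sigmoid-based soft membership, the log-loss correctness term, the size regularizer, and the finite-difference optimizer are all assumptions standing in for the claimed "off-the-shelf gradient or line search based algorithm":

```python
import math

def soft_membership(x, center, half_size, margin):
    # Soft "inside the rectangle" score in (0, 1): the product over
    # dimensions of a sigmoid that is ~1 well inside each face, ~0 well
    # outside, and smooth over a band of width `margin` (the claimed
    # position/size/margin-width parameters).
    score = 1.0
    for xd, cd, hd in zip(x, center, half_size):
        score *= 1.0 / (1.0 + math.exp(-(hd - abs(xd - cd)) / margin))
    return score

def total_loss(params, data, labels, reg_weight=0.01):
    # Total loss = correctness loss + regularization loss.  The log loss
    # on negative labels is what penalizes a rectangle that covers a
    # negative point; the size term acts as the regularization loss.
    n = len(data[0])
    center, half_size = params[:n], params[n:2 * n]
    margin = max(params[-1], 1e-3)
    correctness = 0.0
    for x, y in zip(data, labels):
        p = soft_membership(x, center, half_size, margin)
        p = min(max(p, 1e-9), 1.0 - 1e-9)
        correctness -= y * math.log(p) + (1 - y) * math.log(1.0 - p)
    regularization = reg_weight * sum(abs(h) for h in half_size)
    return correctness + regularization

def train(data, labels, params, lr=0.02, steps=300, tol=1e-7):
    # Finite-difference gradient descent stands in for the claimed
    # off-the-shelf gradient / line-search optimizer.
    prev = total_loss(params, data, labels)
    for _ in range(steps):
        grad = []
        for i in range(len(params)):
            bumped = list(params)
            bumped[i] += 1e-4
            grad.append((total_loss(bumped, data, labels) - prev) / 1e-4)
        params = [p - lr * g for p, g in zip(params, grad)]
        cur = total_loss(params, data, labels)
        if abs(prev - cur) < tol:  # "predetermined condition" met:
            break                  # terminate and return (save) params
        prev = cur
    return params
```

On a handful of 2-D points with positives clustered near the rectangle and negatives outside it, the learned parameters give an inside point a high soft-category score and an outside point a low one.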
The claim recites additional elements as “information processing apparatus”, “at least one memory storing instructions”, “at least one processor configured to execute the instructions” (“Using a computer as a tool to perform a mental process”, MPEP 2106.04(a)(2)(III)(C)); “receive a plurality of Data Inputs which includes positive data being inside a rectangular pattern and negative data being outside a rectangular pattern” (insignificant extra-solution activity, MPEP 2106.05(g)), “the total loss is a sum of a correctness loss and a regularization loss” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use), “at least one processor configured to execute the instructions” (“Using a computer as a tool to perform a mental process”, MPEP 2106.04(a)(2)(III)(C)). This judicial exception is not integrated into a practical application. The elements are recited at a high level of generality, i.e., a generic computing system performing generic functions including generic processing of data. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore the claims are directed to an abstract idea (2019 Revised Patent Subject Matter Eligibility Guidance ("2019 PEG")). Thus, under Step 2A of the Mayo framework, the Examiner holds that the claims are directed to concepts identified as abstract.

STEP 2B. Because the claims include one or more abstract ideas, the examiner now proceeds to Step 2B of the analysis, in which the examiner considers if the claims include, individually or as an ordered combination, limitations that are "significantly more" than the abstract idea itself. 
This includes analysis as to whether there is an improvement to either the "computer itself," "another technology," the "technical field," or significantly more than what is "well-understood, routine, or conventional" (WURC) in the related arts. The instant application includes in Claim 1 additional steps to those deemed to be abstract idea(s). Taken individually, these steps are: “information processing apparatus”, “at least one memory storing instructions”, “at least one processor configured to execute the instructions” (“Using a computer as a tool to perform a mental process”, MPEP 2106.05(f)(2)); “receive a plurality of Data Inputs which includes positive data being inside a rectangular pattern and negative data being outside a rectangular pattern” (WELL-UNDERSTOOD, ROUTINE, CONVENTIONAL ACTIVITY; sending, receiving, displaying and processing data are common and basic functions in computer technology, MPEP 2106.05(d)(II)(i)), “the total loss is a sum of a correctness loss and a regularization loss” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use, MPEP 2106.05(h)), “at least one processor configured to execute the instructions” (“Using a computer as a tool to perform a mental process”, MPEP 2106.05(f)(2)). In the instant case, Claim 1 is directed to the above-mentioned abstract idea. Technical functions such as receiving and extracting are common and basic functions in computer technology. The individual limitations are recited at a high level and do not provide any specific technology or techniques to perform the functions claimed. In addition, when the claims are taken as a whole, as an ordered combination, the combination of steps does not add "significantly more" by virtue of considering the steps as a whole, as an ordered combination. 
The instant application, therefore, still appears only to implement the abstract idea in particular technological environments using what is well-understood, routine, and conventional in the related arts. The steps are still a combination made to the abstract idea. The additional steps only add to those abstract ideas using well-understood and conventional functions, and the claims do not show improved ways of, for example, unconventional, non-routine functions for analyzing model operations or updating the model that could then be pointed to as being "significantly more" than the abstract ideas themselves. Moreover, the Examiner was not able to identify any "unconventional" steps which, when considered in the ordered combination with the other steps, could have transformed the nature of the abstract idea previously identified. Further, note that the limitations in the instant claims are performed by the generically recited computing devices. The limitations are merely instructions to implement the abstract idea on a computing device that is recited at an abstract level and require no more than a generic computing device to perform generic functions. Claim 12 recites a system comprising “A non-transitory computer readable medium storing a program for causing a computer to execute an information processing method” configured to perform the same method as set forth in claim 1; the added element of “A non-transitory computer readable medium storing a program for causing a computer to execute an information processing method” does not transform the judicial exception into a practical application because it is tantamount to a mere instruction to apply the judicial exception to a generic computer. 
The additional elements are also not sufficient to amount to significantly more than the judicial exception because the action of implementing the method on a general purpose computer with at least one processor and at least one memory is tantamount to a mere instruction to apply the judicial exception to a computer. Claim 12 is therefore rejected according to the same findings and rationale as provided above. Independent claim 11 follows the same analogy and is rejected using a similar analysis as claim 1.

CONCLUSION

It is therefore determined that the instant application not only represents an abstract idea identified as such based on criteria defined by the Courts and on USPTO examination guidelines, but also lacks the capability to bring about "Improvements to another technology or technical field" (Alice), bring about "Improvements to the functioning of the computer itself" (Alice), "Apply the judicial exception with, or by use of, a particular machine" (Bilski), "Effect a transformation or reduction of a particular article to a different state or thing" (Diehr), "Add a specific limitation other than what is well-understood, routine and conventional in the field" (Mayo), "Add unconventional steps that confine the claim to a particular useful application" (Mayo), contain "Other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment" (Alice), transform a traditionally subjective process performed by humans into a mathematically automated process executed on computers (McRO), or contain limitations directed to improvements in computer-related technology, including claims directed to software (Enfish). The dependent claims, when considered individually and as a whole, likewise do not provide "significantly more" than the abstract idea for similar reasons as the independent claim. 
Claim 6 discloses “one processor is further configured to implement receiving the Data Input and estimating a soft category using multiple rectangular patterns, and based on multiple Soft Category Estimators and a Smooth Max Selector configured to perform weighted averaging of soft category estimates, modifying the predetermined parameters to reduce a total loss to learn optimal margined non-overlapping rectangular patterns for classifying the Data Input as the positive data and the negative data” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea. Claim 7 discloses “wherein the total loss is a sum of a correctness loss, a regularization loss, and a Multiple Rectangle (MR) regularization loss configured to generate non-overlapping rectangular pattern” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea. Claim 8 discloses “wherein the MR regularization loss includes an overlap loss and a softening loss” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea. 
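The "Smooth Max Selector configured to perform weighted averaging of soft category estimates" of Claim 6 can be read as a softmax-weighted average over the per-rectangle soft scores. A minimal sketch under that assumed reading (the function name and the temperature parameter are illustrative, not from the application):

```python
import math

def smooth_max(estimates, temperature=10.0):
    # Weighted average of soft category estimates in which the weight of
    # each estimate grows exponentially with its value: the result
    # approaches max(estimates) as the temperature grows, yet stays
    # smooth and differentiable, so it can sit inside a gradient-trained
    # model combining multiple rectangular patterns.
    weights = [math.exp(temperature * e) for e in estimates]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
```

For example, `smooth_max([0.1, 0.9])` is close to 0.9, being dominated by the largest soft estimate, while equal estimates are returned unchanged.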
Claim 10 discloses “classifier comprising a hard category estimator configured to receive input data and estimate a category of the data point using a model learned by the information processing apparatus according to Claim 1” (description of data, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea. The dependent claims which impose additional limitations also fail to claim patent eligible subject matter because the limitations cannot be considered statutory. The dependent claims have been examined individually and in combination with the preceding claims; however, they do not cure the deficiencies of claim 1. Where all claims are directed to the same abstract idea, "addressing each claim of the asserted patents [is] unnecessary." Content Extraction & Transmission LLC v. Wells Fargo Bank, Nat'l Ass'n, 776 F.3d 1343, 1348 (Fed. Cir. 2014). If applicant believes the dependent claims are directed towards patent eligible subject matter, they are invited to point out the specific limitations in the claim that are directed towards patent eligible subject matter. Claims for the other statutory classes are similarly analyzed. For at least these reasons, the claimed inventions of each of dependent claims 6-8, 10-12 are directly or indirectly directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more and are rejected under 35 USC 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1, 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over “Fuzzy Min-Max Neural Networks-Part 1: Classification”, published 1992 [hereinafter D1], in view of “Probabilistic Embedding of Knowledge Graphs with Box Lattice Measures”, published 2018 [hereinafter D2]. With regard to Claim 1, D1 discloses an information processing apparatus, comprising: at least one memory storing instructions; and at least one processor configured to execute the instructions (Introduction, “computer-based pattern recognition faces continuous challenge from human recognition. Humans seem to be more efficient in solving many complex classification tasks which still cannot be handled easily by computers. One of the more promising approaches to computer-based pattern recognition is the use of artificial neural networks”, P.12, Col. 
2, ¶3, “It is an accepted practice that, for processes where the physical interference is not recommended or even dangerous, mathematical models and computer simulations are used to predict the consequences of some emergencies so that one might be prepared for quick response. In our case, the computer simulations were used to generate data covering 24-h period of the water distribution network operations”; neural networks and computer simulations are executed by processors using stored program instructions in memory) to: receive a plurality of Data Inputs which includes positive data being inside a rectangular pattern and negative data being outside the rectangular pattern (P.3, Col. 2, ¶2, “The decision whether the presented input pattern belongs to a particular class or cluster, thus whether the corresponding hyperbox is to be expanded, depends mainly on the membership value describing the degree to which an input pattern fits within the hyperbox”, Col. 1, introduction, “hyperbox defines a region of the n-dimensional pattern space that has patterns with full class membership. A hyperbox is completely defined by its min point and its max point, and a membership function is defined with respect to these hyperbox min-max points”, P.3, Col. 1, A. Fuzzy set definition, “… A fuzzy set (class) A in X is characterized by a membership (characteristic) function mA(x) which associates with each point in X a real number in the interval [0, 1], with the value of mA(x) representing the "grade of membership" of x in A. Thus, the nearer the value of mA(x) to unity, the higher the grade of membership of x in A … membership function mA(x) describes the degree to which the object x belongs to the set A where mA(x) = 0 represents no membership and mA(x) = 1 represents full membership.”, P.6, Col. 
1, V, “The training set D consists of a set of M ordered pairs {Xh, dh}, where Xh = (xh1, xh2, …, xhn) ∈ In is the input pattern and dh ∈ {1, 2, …, m} is the index of one of the m classes. Note that Xn and An are both used to represent input patterns. The learning process begins by selecting an ordered pair from D and finding a hyperbox for the same class that can expand (if necessary) to include the input”) and to estimate a soft category using predetermined parameters of a position, size and margin width of a rectangular pattern (P.5, Col. 1, Fig. 3, “Fig. 3. The min-max hyperbox Bj = {Vj, Wj} in R3 is shown. The min and max points are all that are required to define the hyperbox. A membership function is associated with the hyperbox that determines the degree to which any point x ∈ R3 is contained within the box. A collection of these boxes forms a pattern class”, Col. 1, introduction, “hyperbox defines a region of the n-dimensional pattern space that has patterns with full class membership. A hyperbox is completely defined by its min point and its max point, and a membership function is defined with respect to these hyperbox min-max points”, INTRODUCTION, “The min-max (hyperbox) membership function combination defines a fuzzy set, hyperbox fuzzy sets are aggregated to form a single fuzzy set class”) for classifying the Data Input as the positive data and the negative data (P.5, Col. 1, “B. Hypercube Membership Function The membership function … measure the degree to which the hth input pattern Ah falls outside of the hyperbox Bj. On a dimension by dimension basis, this can be considered a measurement of how far each component is greater (less) than the max (min) point value along each dimension that falls outside the min-max bounds of the hyperbox. Also, as bj(Ah) approaches 1, the point should be more contained by the hyperbox, with the value 1 representing complete hyperbox containment”, P.5, Eq. 
1, “violations and the average amount of min point violations. The resulting membership function is defined as … Ah = (ah1, ah2, …, ahn) ∈ In is the hth input pattern, Vj = (vj1, vj2, …, vjn) is the min point for Bj, Wj = (wj1, wj2, …, wjn) is the max point for Bj, and γ is the sensitivity parameter that regulates how fast the membership values decrease as the distance between Ah and Bj increases. An example of this membership function for two dimensions is shown in Fig. 5”); compare the estimated soft category label with the true Data labels for the Data Input and output a feedback on the predetermined parameters (Fig. 8, P.1, Col. 2, ¶1, “Fuzzy min-max classification neural network recall consists of computing the fuzzy union of the membership function values produced from each of the fuzzy set hyperboxes”, P.6, Col. 2, “When the overlap occurs between hyperboxes that represent different classes, the overlap is eliminated using a contraction process”, “Contraction: If overlap between hyperboxes that represent different classes does exist, eliminate the overlap by minimally adjusting each of the hyperboxes”); and modify the predetermined parameters to reduce a total loss to learn an optimal margined rectangular pattern for classifying the positive data and the negative data (P.6, Col. 2, “When the overlap occurs between hyperboxes that represent different classes, the overlap is eliminated using a contraction process”, “Contraction: If overlap between hyperboxes that represent different classes does exist, eliminate the overlap by minimally adjusting each of the hyperboxes”, P.6, Col. 
1, V, “The training set D consists of a set of M ordered pairs … learning process begins by selecting an ordered pair from D and finding a hyperbox for the same class that can expand (if necessary) to include the input”); and penalize the rectangular pattern if the rectangular pattern covers a negative point (P.6, Col. 2, “When the overlap occurs between hyperboxes that represent different classes, the overlap is eliminated using a contraction process”, “Contraction: If overlap between hyperboxes that represent different classes does exist, eliminate the overlap by minimally adjusting each of the hyperboxes”, P.6, Col. 1, V, “The training set D consists of a set of M ordered pairs … learning process begins by selecting an ordered pair from D and finding a hyperbox for the same class that can expand (if necessary) to include the input”; inclusion of negative class samples within a hyperbox is penalized by a contraction process that shrinks the rectangular region to remove overlap), wherein the total loss is a sum of a correctness loss and a regularization loss (P.6, Col. 2, “When the overlap occurs between hyperboxes that represent different classes, the overlap is eliminated using a contraction process”, “Contraction: If overlap between hyperboxes that represent different classes does exist, eliminate the overlap by minimally adjusting each of the hyperboxes”, P.6, Col. 
1, V, “The training set D consists of a set of M ordered pairs … learning process begins by selecting an ordered pair from D and finding a hyperbox for the same class that can expand (if necessary) to include the input”; contraction reduces misclassification (correctness loss) and prevents hyperboxes from overlapping and growing unnecessarily, thereby constraining their size and complexity, i.e., simplifying the model (regularization loss)), and wherein the at least one processor is further configured to execute the instructions to implement: terminating a training process for modifying the predetermined parameters and saving the modified parameters in a storage if a predetermined condition is met (P.5, Col. 1, Fig. 3, “Fig. 3. The min-max hyperbox Bj = {Vj, Wj} in R3 is shown. The min and max points are all that are required to define the hyperbox. A membership function is associated with the hyperbox that determines the degree to which any point x ∈ R3 is contained within the box”; the predetermined parameters (W = max point, V = min point) define the box position, width, and size and are the only adjustable parameters during training, P. 6-7, “Given an ordered pair {Xh, dh} ∈ D, find the hyperbox Bj that provides the highest degree of membership, allows expansion (if needed), and represents the same class as dh. The degree of membership is measured using (1). The maximum size of a hyperbox is bounded above by 0 ≤ s ≤ 1, a user defined value. For the hyperbox Bj to expand to include Xh, the following constraint must be met: … If the expansion criterion has been met for hyperbox Bj, the min point of the hyperbox is adjusted using the equation … and the max point is adjusted using the equation …” (P.7, Col. 2, C. Hyperbox Contraction), “If ∆ > 0, then the ∆th dimensions of the two hyperboxes are adjusted. 
Only one of the n dimensions is adjusted in each of the hyperboxes to keep the hyperbox size as large as possible and minimally impact the shape of the hyperboxes being formed. It is felt that this minimal disturbance principle will provide more robust pattern classification. To determine the proper adjustment to make, the same four cases are examined.”; once overlap is removed the adjustments stop, P.6, Col. 1, V, “The training set D consists of a set of M ordered pairs … learning process begins by selecting an ordered pair from D and finding a hyperbox for the same class that can expand (if necessary) to include the input”; each data point is processed and once the expansion/contraction is done, the new dimension (v, w) parameters are stored. After a pass through the data, the final hyperbox definitions remain fixed as the final dimensions). In other words, the system uses the parameters represented by the maximum and minimum points (i.e., V, W) to define the hyperbox dimensions, termination occurs upon meeting one of the termination requirements, and the hyperbox parameters (i.e., V, W) are stored. D1 does not explicitly teach using any of an off-the-shelf gradient and a line search-based algorithm to determine the predetermined parameters to ensure that the total loss is a minimum. D2 teaches wherein the Parameter Modifier includes an Optimizer which is implemented using an off-the-shelf gradient or a line search-based algorithm (P.6, Col. 1, 4.3, “Instead we optimize a lower bound”, Col. 
2, ¶1, “Since the likelihood of the full data is usually intractable to compute as a conjunction of many negations, we optimize binary conditional and unary marginal terms separately by maximum likelihood”, ¶2, “we parametrize the boxes as (min, ∆ = max - min), with Euclidean projections after gradient steps to keep our parameters in the unit hypercube and maintain the minimum/delta constraints”, ¶4, “training the model by minimize weighted cross-entropy with both the unary marginals and pairwise conditional probabilities”, P. 13, “optimizer: Adam”). D1 and D2 are analogous art to the claimed invention because they are from a similar field of endeavor, namely the usage of box-based models for classification. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify D1 with the resolutions disclosed by D2 with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify D1 as described above to capture anticorrelation and even disjoint concepts, while still providing the benefits of probabilistic modeling, such as the ability to perform rich joint and conditional queries over arbitrary sets of concepts, and both learning from and predicting calibrated uncertainty (D2, P.1, Col. 1). With regard to Claim 10, D1-D2 disclose a classifier comprising a hard category estimator configured to receive input data and estimate a category of the data point using a model learned by the information processing apparatus according to Claim 1 (D1, Abstract, “A supervised learning neural network classifier that utilizes fuzzy sets as pattern classes is described. Each fuzzy set is an aggregate (union) of fuzzy set hyperboxes. A fuzzy set hyperbox is an n-dimensional box defined by a min point and a max point with a corresponding membership function”, P.4, “Fig. 2. A pattern classification system is shown. 
An input pattern A_h is passed through each of the m discriminant functions where each discriminant function represents a pattern class. The values of the discriminant functions are compared, and the one with the largest value is used to identify the pattern class.”, P.5, “Each F_c node represents a class … If a soft decision is required, the outputs are utilized directly. If a hard decision is required, the F_c node with the highest value is located and its node value is set to 1 to indicate that it is the closest pattern class, and the remaining F_c node values are set to 0. This last operation is commonly referred to as a winner-take-all response”). The same motivation to combine set forth for claim 1 equally applies to the current claim. With regard to Claims 11 and 12, Claims 11 and 12 are similar in scope to claim 1; therefore, they are rejected under a similar rationale. D1 further discloses a non-transitory computer readable medium storing (Col. 1, introduction, “hyperbox is completely defined by its min point and its max point, and a membership function is defined with respect to these hyperbox min-max points”; memory stores data as min-max points, P.1, Col. 2, “Fuzzy min-max classification neural network recall consists of computing the fuzzy union of the membership function values produced from each of the fuzzy set hyperboxes”) a program for causing a computer to execute an information processing method (D1, P.2, Col. 2, ¶2, “Both the original fuzzy ART network and the fuzzy min-max classification neural network utilize hyperbox fuzzy sets as the fundamental computing element”, P. 9, A. k-Nearest-Neighbor Classifier, “It might be that the expansion of a point (pattern exemplar) to a hyperbox has several computational advantages, with little loss in classification performance”, P. 10, Col. 2, ¶1, “fuzzy min-max neural network classifier focuses on providing the best possible classification performance with the least amount of computational effort”).
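For illustration only (an editor's sketch, not part of the record), the scheme D1 describes, in which hyperbox memberships are combined by fuzzy union per class and a winner-take-all step yields the hard decision, can be sketched as follows. The membership function here is a simplified stand-in for Simpson's exact formula: 1 inside the box, decaying linearly with distance outside it.

```python
import numpy as np

def membership(x, v, w, gamma=4.0):
    """Simplified hyperbox membership: 1 inside the box [v, w], decaying
    linearly with distance outside (gamma is a sensitivity parameter).
    A stand-in for Simpson's exact membership function, not a copy of it."""
    below = np.maximum(0.0, v - x)   # per-dimension distance below the min point
    above = np.maximum(0.0, x - w)   # per-dimension distance above the max point
    per_dim = np.maximum(0.0, 1.0 - gamma * (below + above))
    return per_dim.mean()

def classify(x, boxes, n_classes, hard=True):
    """boxes: list of (v, w, class_index).  Soft output = per-class fuzzy
    union (max) of hyperbox memberships; hard output = winner-take-all."""
    soft = np.zeros(n_classes)
    for v, w, c in boxes:
        soft[c] = max(soft[c], membership(x, v, w))  # fuzzy union per class
    if not hard:
        return soft
    hard_out = np.zeros(n_classes)
    hard_out[soft.argmax()] = 1.0    # winner-take-all response
    return hard_out
```

With two disjoint boxes for classes 0 and 1, a point inside the class-0 box yields the one-hot hard decision `[1.0, 0.0]`, mirroring the F_c winner-take-all behavior quoted above.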
Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over “Fuzzy Min-Max Neural Networks-Part 1: Classification”, published 1992 [hereinafter D1], in view of “Probabilistic Embedding of Knowledge Graphs with Box Lattice Measures”, published 2018 [hereinafter D2], and further in view of “Partially Supervised Learning of Fuzzy Classification Rules”, published 2004 [hereinafter D3]. With regard to Claim 6, D1-D2 teach the information processing apparatus according to Claim 1, wherein the at least one processor is further configured to implement: receiving the Data Input and estimating a soft category using multiple rectangular patterns (D1, P.5, Col. 1, Fig. 3, “Fig. 3. The min-max hyperbox Bj = {Vj, Wj} in ℝ³ is shown. The min and max points are all that are required to define the hyperbox. A membership function is associated with the hyperbox that determines the degree to which any point x ∈ ℝ³ is contained within the box. A collection of these boxes forms a pattern class”), and based on multiple Soft Category Estimators (D1, P.5, Col. 1, Fig. 3, “Fig. 3. The min-max hyperbox Bj = {Vj, Wj} in ℝ³ is shown. The min and max points are all that are required to define the hyperbox. A membership function is associated with the hyperbox that determines the degree to which any point x ∈ ℝ³ is contained within the box.
A collection of these boxes forms a pattern class”); modifying the predetermined parameters to reduce a total loss (D1, P.6, Col. 2, “When the overlap occurs between hyperboxes that represent different classes, the overlap is eliminated using a contraction process”, “Contraction: If overlap between hyperboxes that represent different classes does exist, eliminate the overlap by minimally adjusting each of the hyperboxes”) to learn optimal margined non-overlapping rectangular patterns for classifying the Data Input as the positive data and the negative data (D1, P.6, Col. 2, “When the overlap occurs between hyperboxes that represent different classes, the overlap is eliminated using a contraction process”, “Contraction: If overlap between hyperboxes that represent different classes does exist, eliminate the overlap by minimally adjusting each of the hyperboxes”). D1-D2 do not teach a Smooth Max Selector configured to perform weighted averaging of soft category estimates. D3 teaches a Smooth Max Selector configured to perform weighted averaging of soft category estimates (P. 52, “replace minimum and maximum operation by fuzzy variants that are differentiable, e.g. the softmax [equation (3.1), reproduced as an image in the record], where α determines how closely softmax approximates max. As α approaches ∞, softmax approaches the maximum function, for α = 0, softmax calculates the mean, and if α approaches −∞, softmax approaches the minimum”). D1-D2 and D3 are analogous art to the claimed invention because they are from a similar field of endeavor: rule-based fuzzy classifiers. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify D1-D2 with the solutions disclosed by D3, with a reasonable expectation of success.
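The equation D3's quote refers to survives in the record only as an image. For illustration, one standard differentiable operator with exactly the limiting behavior the quote describes (max as α → ∞, mean at α = 0, min as α → −∞) is the Boltzmann-weighted average, sketched below. This is an editor's illustration of such an operator, not a reproduction of D3's equation (3.1).

```python
import numpy as np

def softmax_alpha(x, alpha):
    """Smooth max as a Boltzmann-weighted average: each x_i is weighted by
    exp(alpha * x_i).  alpha -> +inf approaches max(x), alpha = 0 gives the
    plain mean, and alpha -> -inf approaches min(x)."""
    x = np.asarray(x, dtype=float)
    w = np.exp(alpha * (x - x.max()))  # shift the exponent for numerical stability
    return float((w * x).sum() / w.sum())
```

Note that the operator is literally a weighted averaging of its inputs, which is how it maps onto the claimed “Smooth Max Selector configured to perform weighted averaging of soft category estimates”: with soft estimates `[0.2, 0.5, 0.9]`, α = 0 returns their mean, while large positive or negative α approaches 0.9 or 0.2 respectively.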
One of ordinary skill in the art would be motivated to modify D1-D2 as described above to develop an evolutionary algorithm and specialized fitness functions that allow semi-supervised learning of interpretable fuzzy rules (D3, P.5, ¶3). With regard to Claim 7, D1-D2-D3 teach the information processing apparatus according to Claim 6, wherein the total loss is a sum of a correctness loss, a regularization loss (D1, P.6, Col. 2, “When the overlap occurs between hyperboxes that represent different classes, the overlap is eliminated using a contraction process”, “Contraction: If overlap between hyperboxes that represent different classes does exist, eliminate the overlap by minimally adjusting each of the hyperboxes”, P.6, Col. 1, V, “The training set D consists of a set of M ordered pairs … learning process begins by selecting an ordered pair from D and finding a hyperbox for the same class that can expand (if necessary) to include the input”; contraction reduces misclassification (correctness loss) and prevents hyperboxes from overlapping and growing unnecessarily, thereby constraining their size and complexity, i.e., simplifying the model (regularization loss)), and a Multiple Rectangle (MR) regularization loss configured to generate non-overlapping rectangular patterns (D1, P.6, Col. 2, “When the overlap occurs between hyperboxes that represent different classes, the overlap is eliminated using a contraction process”, “Contraction: If overlap between hyperboxes that represent different classes does exist, eliminate the overlap by minimally adjusting each of the hyperboxes”, P.5, Col. 1, Fig. 3, “Fig. 3. The min-max hyperbox Bj = {Vj, Wj} in ℝ³ is shown. The min and max points are all that are required to define the hyperbox. A membership function is associated with the hyperbox that determines the degree to which any point x ∈ ℝ³ is contained within the box. A collection of these boxes forms a pattern class”, D3, P.
52, “replace minimum and maximum operation by fuzzy variants that are differentiable, e.g. the softmax [equation (3.1), reproduced as an image in the record], where α determines how closely softmax approximates max. As α approaches ∞, softmax approaches the maximum function, for α = 0, softmax calculates the mean, and if α approaches −∞, softmax approaches the minimum”). The same motivation to combine set forth for claim 6 equally applies to the current claim. With regard to Claim 8, D1-D2-D3 teach the information processing apparatus according to Claim 7, wherein the MR regularization loss includes an overlap loss and a softening loss (D1, P.6, Col. 2, “When the overlap occurs between hyperboxes that represent different classes, the overlap is eliminated using a contraction process”, “Contraction: If overlap between hyperboxes that represent different classes does exist, eliminate the overlap by minimally adjusting each of the hyperboxes”, D3, P. 52, “replace minimum and maximum operation by fuzzy variants that are differentiable, e.g. the softmax [equation (3.1), reproduced as an image in the record], where α determines how closely softmax approximates max. As α approaches ∞, softmax approaches the maximum function, for α = 0, softmax calculates the mean, and if α approaches −∞, softmax approaches the minimum”). The same motivation to combine set forth for claim 7 equally applies to the current claim. Response to Arguments Examiner respectfully withdraws the objection to the title and the rejection of claim 9 in view of the amendments. Applicant argues that the specification does not suggest that the claim features should be interpreted as an abstract idea (i.e., a mental process). Examiner respectfully disagrees; the claims include abstract ideas, as clarified in the rejection.
Applicant argues that claim limitations such as processing positive and negative data inputs, generating feedback by comparing estimated labels to true labels, penalizing a rectangle covering negative points, minimizing a loss function, using a specific algorithm, and terminating and saving learned parameters cannot be practically performed in the human mind. Examiner respectfully disagrees; a human can perform these steps, as the data points can be a small number of points that a person can evaluate, provide feedback on, and apply an algorithm to, using a computing device as a tool to perform a mental process. See at least “Using a computer as a tool to perform a mental process”, MPEP 2106.04(a)(2)(III)(C). Applicant argues that the current claims are consistent with the USPTO abstract idea guidance, Subject Matter Eligibility Example 39. Examiner respectfully disagrees; the claim of Example 39 did not include an abstract idea, whereas the current claims do, so there is no similarity between the current claims and Example 39. Applicant argues that the current claims solve a technical problem and therefore overcome the 35 U.S.C. 101 abstract-idea rejection. Examiner respectfully disagrees; the specification at ¶2-¶14 mentions a problem and presents the claims as a solution in a conclusory form. The specification does not clarify the state of the art at the time of the invention, why it created such a problem, or how the current invention provides a new approach to solve the problem. Applicant argues that the current claims integrate the abstract idea into a practical application. Examiner respectfully disagrees; training a classifier is not an integration of the abstract idea into a practical application. Applicant argues that the current claims disclose exact steps to train a model by providing the core mechanics by which the rectangular pattern classifier is learned and made usable.
Examiner respectfully disagrees; training a model by using an algorithm to minimize a loss is not a novel form of training, and the specification does not disclose what the regular training process is or how the current process amounts to more than insignificant extra-solution activity (i.e., by using a new form of training or an unknown algorithm). Applicant argues that the examiner considered elements of the claim to be well-understood, routine, and conventional (WURC) without providing support. Examiner respectfully disagrees; the office action clearly cites MPEP 2106.05(d)(II)(i), which discloses that the courts consider receiving data to be WURC. Examiner notes that receiving data was the only limitation considered WURC. Applicant argues that the examiner did not provide an “adequate explanation” and that the rejection should therefore be withdrawn. The examiner respectfully disagrees; the examiner provided an adequate explanation, and the applicant is referred to the rejection under 35 U.S.C. 101 for details. Applicant argues that the D1, D2, and D3 references do not teach the claims. Examiner respectfully disagrees; the references teach the claim limitations, and the applicant is referred to the detailed rejection under 35 U.S.C. 103. As to the remaining dependent claims, applicant argues that they are allowable due to their respective direct and indirect dependencies on one of the aforementioned independent claims. The examiner respectfully disagrees; the independent claims were not found allowable, as stated in the paragraphs above in this “Response to Arguments” section. Conclusion The prior art made of record and not relied upon is considered pertinent to the applicant’s disclosure. US Patent Application Publication No.
20200320371, filed by Baker, discloses that a “stochastic generator fills in the data space with a smooth probability distribution, reducing the tendency for a classifier to overfit.” See at least ¶131. Examiner has pointed out particular references contained in the prior art of record in the body of this action for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing the response, fully consider the entire references as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner. It is noted that any citation to specific pages, columns, figures, or lines in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331-33, 216 USPQ 1038-39 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED ABOU EL SEOUD whose telephone number is (303)297-4285. The examiner can normally be reached Monday-Thursday 9:00am-6:00pm MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MOHAMED ABOU EL SEOUD/Primary Examiner, Art Unit 2148

Prosecution Timeline

Dec 22, 2022
Application Filed
Sep 29, 2025
Non-Final Rejection — §101, §103
Jan 02, 2026
Response Filed
Feb 01, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602602
SYSTEMS AND METHODS FOR VALIDATING FORECASTING MACHINE LEARNING MODELS
2y 5m to grant Granted Apr 14, 2026
Patent 12578719
PREDICTION OF REMAINING USEFUL LIFE OF AN ASSET USING CONFORMAL MATHEMATICAL FILTERING
2y 5m to grant Granted Mar 17, 2026
Patent 12561565
MODEL DEPLOYMENT AND OPTIMIZATION BASED ON MODEL SIMILARITY MEASUREMENTS
2y 5m to grant Granted Feb 24, 2026
Patent 12461702
METHODS AND SYSTEMS FOR PROPAGATING USER INPUTS TO DIFFERENT DISPLAYS
2y 5m to grant Granted Nov 04, 2025
Patent 12405722
USER INTERFACE DEVICE FOR INDUSTRIAL VEHICLE
2y 5m to grant Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
38%
Grant Probability
77%
With Interview (+38.7%)
4y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 208 resolved cases by this examiner. Grant probability derived from career allow rate.
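As a sanity check, the headline figures above can be reproduced from the stated counts. The combination rule for the interview lift is an assumption here (the page does not state its formula): the +38.7-point lift is simply added to the career allow rate in percentage points.

```python
# Career allow rate from the examiner's resolved cases shown above.
granted, resolved = 80, 208
base_pct = 100 * granted / resolved
print(round(base_pct))                # grant probability displayed as 38%

# Assumed additive combination of the +38.7-point interview lift.
lift_points = 38.7
print(round(base_pct + lift_points))  # "with interview" figure displayed as 77%
```

Both rounded values match the dashboard, which suggests (but does not confirm) that the tool derives them this way.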
