Prosecution Insights
Last updated: April 19, 2026
Application No. 17/846,149

MULTI-VIEW OUTLIER DETECTION FOR POTENTIAL RELATIONSHIP CAPTURE WITH PAIRED COMPARISON AVOIDANCE

Non-Final OA: §101, §103, §112
Filed
Jun 22, 2022
Examiner
RIVERA, MARIA DE JESUS
Art Unit
2151
Tech Center
2100 — Computer Architecture & Software
Assignee
Nanjing University of Aeronautics and Astronautics
OA Round
1 (Non-Final)
Grant Probability: 67% (Favorable)
OA Rounds: 1-2
To Grant: 4y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (above average; 10 granted / 15 resolved; +11.7% vs TC avg)
Interview Lift: +35.1% across resolved cases with interview
Avg Prosecution: 4y 4m typical; 31 applications currently pending
Career History: 46 total applications across all art units

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 36.0% (-4.0% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 30.5% (-9.5% vs TC avg)
TC averages are estimates • Based on career data from 15 resolved cases
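Each "vs TC avg" delta is the examiner's allowance rate minus the Tech Center baseline, so the implied baselines can be recovered by subtraction. A quick sanity check in Python (the dictionaries simply restate the figures above):

```python
# Examiner allowance rate per statute (%) and reported delta vs. the
# Tech Center average. Implied TC baseline = examiner rate - delta.
examiner = {"101": 13.0, "103": 36.0, "102": 17.8, "112": 30.5}
delta    = {"101": -27.0, "103": -4.0, "102": -22.2, "112": -9.5}
tc_avg = {k: round(examiner[k] - delta[k], 1) for k in examiner}
print(tc_avg)  # every implied baseline comes out to 40.0
```

Every implied baseline is 40.0%, which suggests the Tech Center average estimate is a single flat figure rather than a per-statute one.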

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Action is non-final and is in response to the claims filed June 22, 2022. Claims 1-4 are pending, of which claims 1-4 are currently rejected.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/15/2023 is in compliance with the provisions of 37 CFR 1.97. It has been placed in the application file, and the information referred to therein has been considered as to the merits.

Specification

The disclosure is objected to because of the following informalities:
- In Abstract lines 1-2, “based on the tensor representation is provided” should be “based on the tensor representation provided”.
- In paragraph [0006] line 1, “tensor” should be “tensors”.
- In paragraph [0011] line 11, “by solving following” should be “by solving the following”.

Claim Objections

Claims 1-4 are objected to:
- Claim 1 line 5, “tensor” should be “multi-view tensor” in order to avoid confusion.
- Claim 1 lines 7-8, “the sample matrix” should be “the transformed sample matrix” in order to avoid confusion.
- Claim 1 line 5, “vectorizing in the second data structure, each tensor in the first data structure” should be “vectorizing in the second data structure each tensor in the first data structure”.
- Claims 2-4 are also objected to based on their dependence upon claim 1, which is objected to.
- Claim 3 line 12, “the problem” should be “the Augmented Lagrange Multiplier problem” in order to avoid confusion.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, at Step 1, the claim is directed to a statutory category of invention (method). At Step 2A, Prong 1, Examiner notes that the claim recites an abstract idea. The claim language recites a method for outlier detection by vectorizing tensor-based multi-view samples, constructing an objective function, and calculating outlier scores by solving the objective function. Below are the limitations of claim 1 that recite an abstract idea under mathematical concepts:
- transforming original multi-view samples into the first data structure as a tensor representation to form a set of multi-view tensors stored in the first data structure (mathematical concepts);
- vectorizing in the second data structure, each tensor in the first data structure, the vectorization producing a transformed sample matrix (mathematical concepts);
- calculating outlier scores of all samples according to the representation coefficient matrix and the error matrix obtained in the vectorization, so as to output outlier labels of all samples (mathematical concepts), including a set of outlier scores of all of the samples in the vectorized form of the data structure, the detected outliers capturing possible relationships among multiple views of the tensor representation while avoiding a paired comparison between the views.

All limitations as indicated describe “mathematical concepts”. At Step 2A Prong 2, additional elements not reciting mathematical equations and mathematical calculations thereof include: defining first and second data structures in memory of a host computer; and creating a file for storage in fixed storage of the host computer. These additional elements are recited at a high level of generality to merely generally link the abstract idea to a computer system, such that the claim merely recites “apply it” in a computer.
For these reasons, the additional elements, whether alone or in combination, do not integrate the abstract idea into a practical application. At Step 2B, the additional elements do not, either alone or in combination, amount to significantly more than the recited judicial exception. As stated at Step 2A Prong 2, the claim does no more than generally link the abstract idea to a computer system. For these reasons, claim 1 does not amount to significantly more than the abstract idea. Claim 1 is not eligible.

Regarding claim 2, at Step 1, the claim is directed to a statutory category of invention (method). At Step 2A, Prong 1, Examiner notes that the claim recites an abstract idea. Below are the limitations of claim 2 that recite an abstract idea under mathematical concepts:
- predefining set representation D = {X^1, X^2, …, X^M} with M view data, wherein X^v ∈ R^(d_v×N) represents N samples in a vth view, and d_v is a feature dimension (Chen: Pg. 8 ¶ 0012, X^v ∈ R^(d_v×N), which represents N samples in the vth view, d_v being a feature dimension, as further explained in Pg. 2 lines 30-31), and each x_i^v is normalized according to x_i^v = x_i^v / ||x_i^v||;
- constructing a corresponding multi-view tensor according to X_i = x_i^1 ∘ x_i^2 ∘ … ∘ x_i^M ∈ R^(d_1×d_2×…×d_M) for each multi-view sample, to obtain the set of multi-view tensors I = {X_i}_(i=1)^N, wherein X_i represents the multi-view tensor of an ith instance; and
- expanding each multi-view tensor X_i into a vector form t ∈ R^(d_1 d_2 … d_M × 1), to transform the set of the multi-view tensors I into a sample matrix T = [t_1, t_2, …, t_N] ∈ R^(d_1 d_2 … d_M × N).

All limitations as indicated describe “mathematical concepts”. At Step 2A Prong 2, there are no additional elements recited in the claim. At Step 2B, there are no additional elements, either alone or in combination, that amount to significantly more than the recited judicial exception.
Even when considered in combination, these additional elements do not integrate the abstract idea into a practical application. Claim 2 is not eligible.

Regarding claim 3, at Step 1, the claim is directed to a statutory category of invention (method). At Step 2A, Prong 1, Examiner notes that the claim recites an abstract idea. Below are the limitations of claim 3 that recite an abstract idea under mathematical concepts:
- constructing the objective function for low-rank representation learning for the sample matrix T:

  min_{Z,E} ||Z||_* + α||E||_{2,1}  s.t.  T = TZ + E   (1)

  wherein Z = [z_1, z_2, …, z_N] ∈ R^(N×N) is a representation coefficient matrix; each z_i ∈ R^(N×1) is a representation coefficient of a vector; E ∈ R^(d_1 d_2 … d_M × N) is the error matrix; ||·||_* represents a trace norm; and ||·||_{2,1} represents an l_{2,1} norm; and
- solving the objective function (1) for low-rank representation learning of the sample matrix T by solving the following Augmented Lagrange multiplier problem:

  min_{Z,E,J} ||J||_* + α||E||_{2,1} + tr(Y_1^T (T − TZ − E)) + tr(Y_2^T (Z − J)) + μ(||T − TZ − E||_F^2 + ||Z − J||_F^2)/2   (2)

  wherein variables in the problem (2) are solved by an imprecise ALM algorithm.

All limitations as indicated describe “mathematical concepts”. At Step 2A Prong 2, there are no additional elements recited in the claim. At Step 2B, there are no additional elements, either alone or in combination, that amount to significantly more than the recited judicial exception. Even when considered in combination, these additional elements do not integrate the abstract idea into a practical application. Claim 3 is not eligible.

Regarding claim 4, at Step 1, the claim is directed to a statutory category of invention (method). At Step 2A, Prong 1, Examiner notes that the claim recites an abstract idea.
Below are the limitations of claim 4 that recite an abstract idea under mathematical concepts:
- calculating the outlier score for each sample according to o(i) = −||Z(:,i)||_F^2 + β||E(:,i)||_F^2, wherein o(i) represents an outlier of an ith instance, and β > 0 is a trade-off parameter; and
- calculating the outlier label L according to a predefined threshold γ, after the outlier scores of the instance are calculated: if o(i) > γ, L(i) = 1; otherwise, L(i) = 0.

All limitations as indicated describe “mathematical concepts”. At Step 2A Prong 2, there are no additional elements recited in the claim. At Step 2B, there are no additional elements, either alone or in combination, that amount to significantly more than the recited judicial exception. Even when considered in combination, these additional elements do not integrate the abstract idea into a practical application. Claim 4 is not eligible.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-4 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation “a set of outlier scores” on line 14. It is unclear if this mention of “a set of outlier scores” is the previously recited “outlier scores” on line 10 or some other set of outlier scores.
For examination purposes, the set of outlier scores of line 14 will be construed to be the outlier scores of line 10. Appropriate correction is required. Because claims 2-4 depend upon claim 1, claims 2-4 are additionally rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite.

Claim 2 recites the limitation “a sample matrix” on line 11. It is unclear if this mention of “a sample matrix” is the same as the recited “transformed sample matrix” of claim 1. For examination purposes, “a sample matrix” of claim 2 will be construed to be the “transformed sample matrix” of claim 1. Appropriate correction is required.

Claim 3 recites the limitation “a sample matrix T” on lines 3-4. There is lack of antecedent basis for this limitation. For examination purposes, “a sample matrix T” of claim 3 will be construed to be the “transformed sample matrix” as recited in claim 1, line 6. Appropriate correction is required.

Claim 3 recites the limitation “an imprecise ALM algorithm” on lines 12-13. It is not specified what the acronym “ALM” stands for within the claim, which therefore renders the claim unclear. For examination purposes, “ALM” will be construed to stand for the previously recited phrase “Augmented Lagrange multiplier”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 are rejected under 35 U.S.C. 103 as being unpatentable over Kraus et al. (US 2020/0285737 A1) (hereinafter “Kraus”), in view of Riddle et al. (US 2022/0382833 A1) (hereinafter “Riddle”), further in view of Chen et al. (CN 112116033), included in the IDS filed on 01/15/2023 (hereinafter “Chen”).

Regarding claim 1, Kraus teaches a machine learning model for detecting anomalies, i.e., outliers, being performed in a computer, the sequences or tensors being vectorized in order to detect anomalies using an anomaly detection code method implemented on a computer (Kraus: Fig. 4 element 408; Abstract). A first data structure (the originating structure) and the second data structure (once vectorization occurs) are defined in memory of the computer 112 in Fig. 4 as taught by Kraus.

While Kraus teaches the anomaly scores, i.e., outlier scores, of vectorized input stored in memory (Kraus: ¶ 0275), Kraus does not explicitly teach writing anomaly scores to a file, or the method to detect the outliers as recited in claim 1. However, Riddle teaches writing anomaly scores, i.e., outlier scores, to a file (Riddle: ¶ 0067). It would have been obvious to combine the writing of outlier scores to a file as taught by Riddle with the system as taught by Kraus, as both teachings are directed towards anomaly or outlier detection. One with ordinary skill in the art would have been motivated to combine the teachings because writing the outlier scores to a file would enable access across systems and software processes, thereby lessening latency (Riddle: ¶ 0067).
Kraus in view of Riddle does not explicitly teach: transforming original multi-view samples into the first data structure as a tensor representation to form a set of multi-view tensors stored in the first data structure; vectorizing in the second data structure, each tensor in the first data structure, the vectorization producing a transformed sample matrix; constructing an objective function for low-rank representation learning for the sample matrix, and calculating an optimal representation coefficient matrix and error matrix, which minimize a value of the objective function; calculating outlier scores of all samples according to the representation coefficient matrix and the error matrix obtained in the vectorization, so as to output outlier labels of all samples; and the detected outliers capturing possible relationships among multiple views of the tensor representation while avoiding a paired comparison between the views.

However, Chen teaches:
- transforming original multi-view samples into the first data structure as a tensor representation to form a set of multi-view tensors stored in the first data structure (Chen: Pg. 2 Lines 22-23, remodeling of multi-view data into tensor representation to form a set of multi-view tensors, further explained in Pg. 3 Lines 29-32);
- vectorizing in the second data structure (Chen: Pg. 2 Lines 22-23, tensors are vectorized, further explained in Pg. 3 Lines 29-36) each tensor in the first data structure, the vectorization producing a transformed sample matrix (Chen: Pg. 3 Lines 23-34, vectorization of tensors used to produce the transformed sample matrix; Pg. 4 Line 1, step S201, constructing a sample matrix from the vectorization);
- constructing an objective function for low-rank representation learning for the sample matrix (Chen: Pg. 4 Line 1, construction of sample matrix T for representing the learning target function, i.e., objective function), and calculating an optimal representation coefficient matrix and error matrix (Chen: Pg. 4 Lines 1-5, formulas also shown in Pg. 7 Lines 18-20, discussing the calculation of a representation coefficient matrix and error matrix for the objective function to be solved), which minimize a value of the objective function (Chen: Pg. 5 Claim 1 Lines 4-6, making the target function value, i.e., the value of the objective function, minimum);
- calculating outlier scores of all samples according to the representation coefficient matrix and the error matrix obtained in the vectorization (Chen: Pg. 4 Lines 29-32, calculating outlier scores for each sample relating to the ith instance, the ith instance being derived using the representation coefficient matrix and error matrix as shown in Pg. 4 Lines 3-4), so as to output outlier labels of all samples (Chen: Pg. 4 Lines 31-32, outlier labels are calculated for each of the samples); and
- the detected outliers capturing possible relationships among multiple views of the tensor representation while avoiding a paired comparison between the views (Chen: Pg. 2 Lines 6-15, avoiding paired comparison between views, while also being able to capture the relationship between a plurality of views).

In combining the teachings, the outlier detection method as taught by Chen would be implemented as the anomaly detection code of Kraus as shown in Fig. 4 element 408. It would have been obvious to combine the outlier detection method as taught by Chen with the system as taught by Kraus in view of Riddle, as all teachings are directed towards outlier detection methods. One with ordinary skill in the art would have been motivated to combine the teachings because this would allow for fully capturing the relationship of multi-faceted data using the multiple views, and so more accurately pinpointing the true outliers (Chen: Abstract).
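The front half of the claimed pipeline, per-view normalization, outer-product tensor construction, and vectorization into the sample matrix T, can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the claim language, not code from any of the cited references; the function name and data layout are invented for the example:

```python
import numpy as np

def build_sample_matrix(views):
    """Sketch of claims 1-2: for each sample i, normalize x_i^v for every
    view v, form the multi-view tensor X_i = x_i^1 o x_i^2 o ... o x_i^M
    via outer products, then vectorize X_i into column t_i of T."""
    N = views[0].shape[1]                  # number of samples per view
    cols = []
    for i in range(N):
        # x_i^v <- x_i^v / ||x_i^v||  (per-view normalization, claim 2)
        xs = [V[:, i] / np.linalg.norm(V[:, i]) for V in views]
        Xi = xs[0]
        for x in xs[1:]:                   # X_i by repeated outer product
            Xi = np.multiply.outer(Xi, x)
        cols.append(Xi.reshape(-1))        # t_i in R^(d_1 d_2 ... d_M)
    return np.stack(cols, axis=1)          # T in R^(d_1 d_2 ... d_M x N)
```

Row-major flattening of an outer product coincides with the Kronecker product of the per-view vectors, so each column of T equals kron(x_i^1, …, x_i^M) of the normalized samples.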
Therefore, Kraus in view of Riddle in view of Chen teaches: A multi-view outlier detection method, comprising: defining first and second data structures in memory of a host computer; transforming original multi-view samples into the first data structure as a tensor representation to form a set of multi-view tensors stored in the first data structure; vectorizing in the second data structure, each tensor in the first data structure, the vectorization producing a transformed sample matrix; constructing an objective function for low-rank representation learning for the sample matrix, and calculating an optimal representation coefficient matrix and error matrix, which minimize a value of the objective function; calculating outlier scores of all samples according to the representation coefficient matrix and the error matrix obtained in the vectorization, so as to output outlier labels of all samples; and creating a file for storage in fixed storage of the host computer, the file including a set of outlier scores of all of the samples in the vectorized form of the data structure, the detected outliers capturing possible relationships among multiple views of the tensor representation while avoiding a paired comparison between the views.

Regarding claim 2, Kraus in view of Riddle in view of Chen teaches: The multi-view outlier detection method according to claim 1, wherein the transforming step comprises: predefining set representation D = {X^1, X^2, …, X^M} with M view data (Chen: Pg. 2 Lines 30-31, predefining the set D of M view data = (X^1, X^2, …, X^M), which represents N samples in the vth view, wherein the characteristic dimension is d_v; formulas shown on Pg. 8 ¶ 0012, with the normalizing formula shown in line 2 of ¶ 0012 on Pg. 8), wherein X^v ∈ R^(d_v×N) represents N samples in a vth view, and d_v is a feature dimension (Chen: Pg. 8 ¶ 0012, X^v ∈ R^(d_v×N), which represents N samples in the vth view, d_v being a feature dimension, as further explained in Pg. 2 lines 30-31); and each x_i^v is normalized according to x_i^v = x_i^v / ||x_i^v|| (Chen: Pg. 8 ¶ 0012); constructing a corresponding multi-view tensor (Chen: constructing the multi-view tensor as explained on Pg. 2, steps S101-S103) according to X_i = x_i^1 ∘ x_i^2 ∘ … ∘ x_i^M ∈ R^(d_1×d_2×…×d_M) (Chen: Pg. 8 ¶ 0013 shows the formula to be used for constructing the multi-view tensor; Pg. 5 claim 2) for each multi-view sample, to obtain the set of multi-view tensors I = {X_i}_(i=1)^N (Chen: Pg. 8 ¶ 0013; Pg. 2 Step S103 discusses the set I of multi-view tensors), wherein X_i represents the multi-view tensor of an ith instance (Chen: Pg. 2, step S102); and expanding each multi-view tensor X_i into a vector form t ∈ R^(d_1 d_2 … d_M × 1) (Chen: Pg. 2, step S103, unfold, i.e., expand, each multi-view tensor into vector form; Pg. 9 ¶ 0014, step S103, shows the formulas to be used for expanding the tensor into a vector (vectorization)), to transform the set of the multi-view tensors I into a sample matrix (Chen: Pg. 2, S103, the multi-view tensor set I is converted, i.e., transformed, into the sample matrix) T = [t_1, t_2, …, t_N] ∈ R^(d_1 d_2 … d_M × N) (Chen: Pg. 9 ¶ 0014, step S103). The motivation to combine with respect to claim 1 applies equally to claim 2.

Regarding claim 3, Kraus in view of Riddle in view of Chen further teaches: The multi-view outlier detection method according to claim 1, wherein the constructing comprises: constructing the objective function for low-rank representation learning for the sample matrix T (Chen: Pg. 4 Lines 1-5, constructing sample matrix T for the learning target function, i.e., objective function; Pg. 9 ¶ 0017 formula): min_{Z,E} ||Z||_* + α||E||_{2,1}  s.t.  T = TZ + E   (1) (Chen: Pg. 9 ¶ 0017), wherein Z = [z_1, z_2, …, z_N] ∈ R^(N×N) is a representation coefficient matrix (Chen: Pg. 4 Lines 3-4; also shown in Pg. 9 ¶ 0018); each z_i ∈ R^(N×1) is a representation coefficient of a vector (Chen: Pg. 4 Lines 3-4; Pg. 9 ¶ 0019); E ∈ R^(d_1 d_2 … d_M × N) is the error matrix (Chen: Pg. 4 Line 5; Pg. 9 ¶ 0019); ||·||_* represents a trace norm (Chen: Pg. 4 Line 5; Pg. 9 ¶ 0019); and …
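For readers who want to experiment with the claimed formulation, here is a hedged sketch of an inexact (imprecise) ALM iteration for problem (1), followed by the claim 4 scoring rule o(i) = -||Z(:,i)||_F^2 + β||E(:,i)||_F^2. This is a standard low-rank-representation solver reconstructed from equations (1)-(2) above, not code from Chen; all parameter values are illustrative:

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: proximal operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def l21_shrink(A, tau):
    """Column-wise shrinkage: proximal operator of tau*||.||_{2,1}."""
    norms = np.linalg.norm(A, axis=0, keepdims=True)
    return A * np.maximum(1 - tau / np.maximum(norms, 1e-12), 0)

def lrr_outliers(T, alpha=0.5, beta=1.0, gamma=0.0,
                 mu=1e-2, rho=1.1, mu_max=1e6, iters=200):
    """Inexact ALM for min ||Z||_* + alpha*||E||_{2,1} s.t. T = T Z + E,
    then outlier scores o(i) = -||Z(:,i)||^2 + beta*||E(:,i)||^2."""
    d, N = T.shape
    Z = np.zeros((N, N)); E = np.zeros((d, N))
    Y1 = np.zeros((d, N)); Y2 = np.zeros((N, N))
    TtT = T.T @ T
    for _ in range(iters):
        J = svt(Z + Y2 / mu, 1.0 / mu)                   # J-update (trace norm)
        # Z-update solves (T^T T + I) Z = T^T (T - E + Y1/mu) + J - Y2/mu
        rhs = T.T @ (T - E + Y1 / mu) + J - Y2 / mu
        Z = np.linalg.solve(TtT + np.eye(N), rhs)
        E = l21_shrink(T - T @ Z + Y1 / mu, alpha / mu)  # E-update (l_{2,1})
        Y1 += mu * (T - T @ Z - E)                       # multiplier updates
        Y2 += mu * (Z - J)
        mu = min(rho * mu, mu_max)
    o = -np.sum(Z**2, axis=0) + beta * np.sum(E**2, axis=0)  # claim 4 score
    labels = (o > gamma).astype(int)                     # L(i) = 1 if o(i) > gamma
    return Z, E, o, labels
```

The Z-update follows from setting the gradient of (2) with respect to Z to zero; the J and E updates are the closed-form proximal steps for the trace norm and the l_{2,1} norm.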

Prosecution Timeline

Jun 22, 2022
Application Filed
Aug 21, 2022
Response after Non-Final Action
Nov 03, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596553
TECHNIQUE FOR SPECULATIVELY GENERATING AN OUTPUT VALUE IN ANTICIPATION OF ITS USE BY DOWNSTREAM PROCESSING CIRCUITRY
2y 5m to grant • Granted Apr 07, 2026
Patent 12596528
MULTIPURPOSE MULTIPLY-ACCUMULATOR ARRAY
2y 5m to grant • Granted Apr 07, 2026
Patent 12580553
APPARATUS, METHOD, AND PROGRAM FOR POWER STABILIZATION THROUGH ARITHMETIC PROCESSING OF DUMMY DATA
2y 5m to grant • Granted Mar 17, 2026
Patent 12572619
MATRIX PROCESSING ENGINE WITH COUPLED DENSE AND SCALAR COMPUTE
2y 5m to grant • Granted Mar 10, 2026
Patent 12566952
MULTIPLIER BY MULTIPLEXED OFFSETS AND ADDITION, RELATED ELECTRONIC CALCULATOR FOR THE IMPLEMENTATION OF A NEURAL NETWORK AND LEARNING METHOD
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview (+35.1%): 99%
Median Time to Grant: 4y 4m
PTA Risk: Low
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
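As the note says, the grant probability is derived from the career allow rate, and the arithmetic is a one-liner (an illustrative check only, using the counts reported above):

```python
granted, resolved = 10, 15        # resolved cases reported for this examiner
allow_rate = granted / resolved
print(f"{allow_rate:.0%}")        # prints 67%, matching the reported grant probability
```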
