Prosecution Insights
Last updated: April 19, 2026
Application No. 18/388,868

MULTI-VIEW HUMAN ACTION RECOGNITION METHOD BASED ON HYPERGRAPH LEARNING

Non-Final OA (§102, §103, §112)
Filed: Nov 13, 2023
Examiner: CHAN, CAROL WANG
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: BEIJING UNIVERSITY OF TECHNOLOGY
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (292 granted / 351 resolved), +21.2% vs TC avg (above average)
Interview Lift: +36.2% (resolved cases with interview)
Typical Timeline: 2y 6m avg prosecution; 19 currently pending
Career History: 370 total applications across all art units

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 38.7% (-1.3% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 24.1% (-15.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 351 resolved cases.

Office Action

§102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55. Examiner notes that while Applicant has claimed priority to the foreign application, the foreign priority date of 11/17/2022 is NOT the effective filing date of the claimed invention, since Applicant has not perfected the right of priority by providing a certified translation of the priority application, the foreign application not being in English. (See slide 5 on the foreign priority claim: http://www.uspto.gov/sites/default/files/aia_implementation/fitf_comprehensive_training_prior_art_under_aia.pdf)

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/15/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claim 1 is objected to because of the following informalities: Line 8 recites “the hypergraphs”, which Examiner suggests amending to “the spatial hypergraphs and the temporal hypergraphs”. Appropriate correction is required.

Claim 3 is objected to because of the following informalities: Line 4 recites “dividing human body”, which Examiner suggests amending to “dividing a human body”; Line 5 recites “the joints of the same part in different views at the same moment”, which Examiner suggests amending to “the joints of a same part in different views at a same moment”; Line 6 recites “spatial information of joints”, which Examiner suggests amending to “spatial information of the joints”; and Line 8 recites “represents weight”, which Examiner suggests amending to “represents a weight”. Appropriate correction is required.
Claim 4 is objected to because of the following informalities: Line 2 recites “the spatial hypergraph”, which Examiner suggests amending to “the spatial hypergraphs”; Line 5 recites “the matrix”, which Examiner suggests amending to “the feature matrix”; Line 5 recites “joints of human”, which Examiner suggests amending to “joints of the human body”; and Line 7 recites “an incidence matrix”, which Examiner suggests amending to “an incidence matrix H_n^spa”. Appropriate correction is required.

Claim 6 is objected to because of the following informalities: Line 4 recites “the matrix”, which Examiner suggests amending to “the incidence matrix”. Appropriate correction is required.

Claim 10 is objected to because of the following informalities: Lines 4-7 recite “diagonal matrix which is composed of the degrees of the vertices in the n-th spatial hypergraph, and…the diagonal matrix which is composed of the degrees of the hyperedges in the n-th spatial hypergraph”, which Examiner suggests amending to “diagonal matrix of the degrees of the vertices in the n-th spatial hypergraph, and…the diagonal matrix of the degrees of the hyperedges in the n-th spatial hypergraph”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 3 recites the limitation "the spatial hypergraph" in Line 2. There is insufficient antecedent basis for this limitation in the claim, as it is unclear which spatial hypergraph is being referred to, since spatial hypergraphs (plural) are disclosed earlier in claim 1. Examiner suggests amending to “a spatial hypergraph” and has interpreted the limitation as such.

Claim 4 recites the limitations "the n-th spatial hypergraph" in Line 6, “the vertices in the n-th spatial hypergraph” in Line 8, “the hyperedges in the n-th spatial hypergraph” in Line 9, “the i-th joint” in Line 12, “the n-th frame” in Line 12, “the p-th view” in Line 12, “the m-th hyperedge” in Line 12, and “the network” in Line 14. There is insufficient antecedent basis for these limitations in the claim, as there is no earlier mention of an n-th spatial hypergraph, vertices in the n-th spatial hypergraph, hyperedges in the n-th spatial hypergraph, an i-th joint, an n-th frame, a p-th view, or an m-th hyperedge, and it is unclear what network is being referred to (the hypergraph neural networks, one of the hypergraph neural networks, or another network). Examiner suggests amending the limitations to “an n-th spatial hypergraph”, “vertices in the n-th spatial hypergraph” (deleting “the”), “hyperedges in the n-th spatial hypergraph” (deleting “the”), “an i-th joint”, “an n-th frame”, “a p-th view”, “an m-th hyperedge”, and “the hypergraph neural networks”, respectively, and has interpreted the limitations as such.
Claim 5 recites the limitations "the vertex set of the n-th spatial hypergraph" in Line 3, “the hyperedge set of the n-th spatial hypergraph” in Lines 3-4, and “the weight of each hyperedge in the n-th spatial hypergraph” in Lines 4-5. There is insufficient antecedent basis for these limitations in the claim, as there is no earlier mention of a vertex set of the n-th spatial hypergraph, a hyperedge set of the n-th spatial hypergraph, or a weight of each hyperedge in the n-th spatial hypergraph. Examiner suggests amending the limitations to “a vertex set of the n-th spatial hypergraph”, “a hyperedge set of the n-th spatial hypergraph”, and “a weight of each hyperedge in the n-th spatial hypergraph”, respectively, and has interpreted the limitations as such.

Claim 6 recites the limitation "the vertex" in Line 6. There is insufficient antecedent basis for this limitation in the claim, as it is unclear which vertex is being referred to. Examiner suggests amending to “a vertex” or amending to clarify the limitation.

Claim 7 recites the limitation "the number of hyperedges" in Lines 5-6. There is insufficient antecedent basis for this limitation in the claim, as there is no earlier mention of a number of hyperedges. Examiner suggests amending to “a number of hyperedges” and has interpreted the limitation as such.

Claim 8 recites the limitation "the degree d_n^spa(v_{p,n}^(i)) of the vertex v_{p,n}^(i) ∈ V_n^spa" in Lines 2-3. There is insufficient antecedent basis for this limitation in the claim, as it is unclear which degree d_n^spa(v_{p,n}^(i)) is being referred to and which vertex v_{p,n}^(i) ∈ V_n^spa is being referred to. Examiner suggests amending the limitation to "the degrees d_n^spa(v_{p,n}^(i)) of a vertex v_{p,n}^(i) ∈ V_n^spa" and has interpreted the limitation as such.

Claim 9 recites the limitation "the degree δ_n^spa(e_{m,n}^spa) of the hyperedge e_{m,n}^spa ∈ ε_n^spa" in Lines 2-3.
There is insufficient antecedent basis for this limitation in the claim, as it is unclear which degree δ_n^spa(e_{m,n}^spa) is being referred to and which hyperedge e_{m,n}^spa ∈ ε_n^spa is being referred to. Examiner suggests amending the limitation to "the degrees δ_n^spa(e_{m,n}^spa) of a hyperedge e_{m,n}^spa ∈ ε_n^spa" and has interpreted the limitation as such.

Claim 9 also recites the limitation "wherein D_e^n and D_v^n represent diagonal matrices of the degrees of the hyperedges and the degrees of the vertices in the n-th spatial hypergraph respectively" in Lines 5-6. There is insufficient antecedent basis for this limitation in the claim, as there is no earlier mention of D_e^n and D_v^n (only in claim 10, which depends on claim 9). Examiner suggests moving this limitation into claim 10 instead.

With regards to claim 10, it depends on claim 9 and thus is also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim 1 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wei et al. (Dynamic Hypergraph Convolutional Networks for Skeleton-Based Action Recognition).

With regards to claim 1, Wei et al. discloses a multi-view human action recognition method based on hypergraph learning, comprising acquiring video data from P views (4.1 Datasets: Para. 1 lines 1-11, “video clips”), wherein the method further comprises the following steps:

step 1: pre-processing the video data (4.1 Datasets: Para. 1 lines 1-11, “video clips”, “OpenPose”);

step 2: constructing spatial hypergraphs based on joint information (I Introduction: Para. 5 lines 18-20; 3.2 HGCN: Para. 1 lines 1-9; Fig. 5, “static hypergraph”);

step 3: constructing temporal hypergraphs based on the joint information (I Introduction: Para. 5 lines 3-6; 3.3 Dynamic joint weight of hypergraph: Para. 2 lines 1-2; 3.4 Dynamic topology of hypergraph: Para. 1 lines 1-3; Fig. 5, “dynamic hypergraph”);

step 4: performing feature learning of the spatial hypergraphs and the temporal hypergraphs using hypergraph neural networks (4.2 Training details: Para. 1 lines 1-11, “training”); and

step 5: extracting higher order information represented by the hypergraphs, and performing action recognition of human actions (2.3 Hypergraph neural networks: Para. 1 lines 6-8; 3.5 DHGCN: Para. 1 lines 1-11, 14-16, and 22-26; 5 Conclusions and future work: Para. 1 lines 1-6, “final classification result”).

Claim 1 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wang et al. (Dynamic Spatial-Temporal Hypergraph Convolutional Network for Skeleton-based Action Recognition). Applicant cannot rely upon the certified copy of the foreign priority application to overcome this rejection because a translation of said application has not been made of record in accordance with 37 CFR 1.55.
When an English language translation of a non-English language foreign application is required, the translation must be that of the certified copy (of the foreign application as filed), submitted together with a statement that the translation of the certified copy is accurate. See MPEP §§ 215 and 216.

With regards to claim 1, Wang et al. discloses a multi-view human action recognition method based on hypergraph learning, comprising acquiring video data from P views (4.1 Datasets: Para. 1 lines 1-13, “Northwestern-UCLA”, “NTU RGB+D”, “NTU RGB+D 120”), wherein the method further comprises the following steps:

step 1: pre-processing the video data (4.1. Datasets: Para. 1 lines 1-13, “extracted”, “training set”, “test set”);

step 2: constructing spatial hypergraphs based on joint information (3.2. Dynamic Spatial-temporal Hypergraph Convolution (DST-HC): Hypergraph Construction: Para. 1 lines 2-5, Para. 3 lines 1-5, Fig. 2(c), “spatial hypergraph”);

step 3: constructing temporal hypergraphs based on the joint information (3.2. Dynamic Spatial-temporal Hypergraph Convolution (DST-HC): Hypergraph Construction: Para. 1 lines 2-5, Para. 2 lines 1-6, Fig. 2(c), “dynamic time-point hypergraph”);

step 4: performing feature learning of the spatial hypergraphs and the temporal hypergraphs using hypergraph neural networks (3.2. Dynamic Spatial-temporal Hypergraph Convolution (DST-HC): Hypergraph Convolution: Para. 1 lines 1-5; 4.2. Implementation Details: Para. 1 lines 1-11, “learning”); and

step 5: extracting higher order information represented by the hypergraphs, and performing action recognition of human actions (3.2. Dynamic Spatial-temporal Hypergraph Convolution (DST-HC): Hypergraph Convolution: Para. 1 lines 1-8; 3.3. High-order Information Fusion: Para. 1 lines 1-8; Table 2, “high-order information”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Wei et al. (Dynamic Hypergraph Convolutional Networks for Skeleton-Based Action Recognition) in view of Xue et al. (CN 108537109, see translated version).

With regards to claim 2, Wei et al. discloses the multi-view human action recognition method based on hypergraph learning according to claim 1, wherein the pre-processing of the video data comprises: segmenting the video data into N frames, extracting the joint information of each frame using OpenPose, and constructing the spatial hypergraphs and the temporal hypergraphs based on the joint information (4.1 Datasets: Para. 1 lines 1-11; I Introduction: Para. 5 lines 3-6 and 18-20; 3.2 HGCN: Para. 1 lines 1-9; 3.3 Dynamic joint weight of hypergraph: Para. 2 lines 1-2; 3.4 Dynamic topology of hypergraph: Para. 1 lines 1-3, “each frame”, “OpenPose”, “joints”). Wei et al. does not explicitly teach storing the joint information in a json file by saving x and y coordinates of joints. However, Xue et al. discloses the concept of extracting joint information of each frame using OpenPose and storing the joint information in a json file by saving x and y coordinates of joints for easy access later on (Para. 0014 lines 3-5, 0026 lines 4-12, 0028 lines 7-14, “joints”, “json file”).
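The json storage concept that the rejection attributes to Xue can be sketched as follows. This is a hypothetical minimal example, not the applicant's or Xue's actual pipeline; the file name, field names, and 18-joint layout are illustrative assumptions:

```python
import json

# Hypothetical per-frame record of 2D joint coordinates, in the style a pose
# estimator such as OpenPose might produce (assumed layout, not the real format).
frame_joints = {
    "frame": 0,
    "view": 1,
    "joints": [{"id": i, "x": 100.0 + i, "y": 200.0 + i} for i in range(18)],
}

# Save the x and y coordinates of the joints to a json file for later access.
with open("joints_frame0_view1.json", "w") as f:
    json.dump(frame_joints, f)

# Reload to confirm the joint information round-trips intact.
with open("joints_frame0_view1.json") as f:
    loaded = json.load(f)
assert loaded == frame_joints
```

Because json preserves both numeric values and the list ordering of joints, a downstream hypergraph-construction step can read coordinates back per frame and per view without reprocessing the video.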
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the concept of storing the joint information extracted using OpenPose in a json file, by saving x and y coordinates of joints, as taught by Xue et al., into the multi-view human action recognition method of Wei et al. The motivation for this would be to allow for easier access of the joint information.

With regards to claim 3, the combination of Wei et al. and Xue et al. discloses the multi-view human action recognition method based on hypergraph learning according to claim 2, wherein the spatial hypergraph is a hypergraph G^spa = (V^spa, ε^spa, W^spa) that is constructed according to a limb composition strategy by using the joints as vertices, dividing a human body into five parts, which are a trunk, a left hand, a right hand, a left leg, and a right leg, and connecting the joints of the same part in different views at the same moment using a hyperedge, and that is used to achieve an aggregation of spatial information of joints, wherein V^spa represents a vertex set of the spatial hypergraph, ε^spa represents a hyperedge set of the spatial hypergraph, and W^spa represents a weight of each hyperedge in the hyperedge set of the spatial hypergraph, which is a weight matrix (Wei et al.: 3.2 HGCN: Para. 1 lines 1-9, Fig. 3, “joints”, “hyperedge”, “torso”, “left hand”, “right hand”, “left leg”, “right leg”, “G-h”).

Claims 4-9 are rejected under 35 U.S.C. 103 as being unpatentable over Wei et al. (Dynamic Hypergraph Convolutional Networks for Skeleton-Based Action Recognition) in view of Xue et al. (CN 108537109, see translated version) and further in view of Xia et al. (CN 110363236, see translated version).

With regards to claim 4, the combination of Wei et al. and Xue et al. discloses the multi-view human action recognition method based on hypergraph learning according to claim 3, wherein the constructing of the spatial hypergraph comprises the following sub-steps:

step 21: initializing initial vertex features of each spatial hypergraph as a feature matrix X_n, each row of the matrix being coordinates of the joints of the human body (3.1 GCN: Para. 2 lines 1-2, Fig. 2(b), “feature matrix”);

step 22: generating the n-th spatial hypergraph G_n^spa (3.2 HGCN: Para. 1 lines 1-5, “hypergraph”);

step 23: constructing an incidence matrix based on the vertex set and the hyperedge set (3.2 HGCN: Para. 1 lines 3-4, Para. 2 lines 1-3, “incident matrix H”);

step 24: computing degrees d_n^spa(v_{p,n}^(i)) of the vertices in the n-th spatial hypergraph and degrees δ_n^spa(e_{m,n}^spa) of the hyperedges in the n-th spatial hypergraph, wherein d_n^spa represents a function for computing the degrees of the vertices in the n-th spatial hypergraph, δ_n^spa represents a function for computing the degrees of the hyperedges in the n-th spatial hypergraph, v_{p,n}^(i) represents the i-th joint in the n-th frame of the p-th view, and e_{m,n}^spa represents the m-th hyperedge in the n-th spatial hypergraph (3.2 HGCN: Para. 2 lines 4-9, “degree of node”, “degree of hyperedge”); and

step 25: optimizing the network using higher order information (3.2 HGCN: Para. 2 lines 11-14; 3.5 DHGCN: Para. 1 lines 1-16, “update rule”).

The combination of Wei et al. and Xue et al. does not explicitly teach generating a Laplace matrix G_n^spa by performing Laplace transformation of the incidence matrix H_n^spa. However, Xia et al. discloses generating a Laplace matrix by performing Laplace transformation of the incidence matrix in order to obtain optimized features for clustering or classification (Para. 0113 line 1, 0122 line 1, 0124 lines 1-5, 0156 lines 2-10, “Laplacian matrix”, “optimization”, “features”).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to include the concept of generating a Laplace matrix by performing Laplace transformation of the incidence matrix, as taught by Xia et al., into the multi-view human action recognition method of the combination of Wei et al. and Xue et al. The motivation for this would be to obtain optimized features.

With regards to claim 5, the combination of Wei et al., Xue et al., and Xia et al. discloses the multi-view human action recognition method based on hypergraph learning according to claim 4, wherein a calculation formula of the n-th spatial hypergraph G_n^spa is:

G_n^spa = (V_n^spa, ε_n^spa, W_n^spa)

wherein V_n^spa represents the vertex set of the n-th spatial hypergraph, ε_n^spa represents the hyperedge set of the n-th spatial hypergraph, and W_n^spa represents the weight of each hyperedge in the n-th spatial hypergraph, n = 1, 2, ..., N (Wei et al.: 3.2 HGCN: Para. 1 lines 1-5, “Gh”).

With regards to claim 6, the combination of Wei et al., Xue et al., and Xia et al. discloses the multi-view human action recognition method based on hypergraph learning according to claim 5, wherein step 23 comprises that the incidence matrix H_n^spa of the n-th spatial hypergraph represents topology of the n-th spatial hypergraph, and a corresponding element in the matrix is 1 if the vertex exists in a certain hyperedge, and 0 otherwise (Wei et al.: 3.2 HGCN: Para. 1 lines 3-4, Para. 2 lines 1-3, “incident matrix H”).

With regards to claim 7, the combination of Wei et al., Xue et al., and Xia et al. discloses the multi-view human action recognition method based on hypergraph learning according to claim 6, wherein the incidence matrix of each spatial hypergraph is defined as:

H_n^spa(v_{p,n}^(i), e_{m,n}^spa) = 1 if v_{p,n}^(i) ∈ e_{m,n}^spa, and 0 if v_{p,n}^(i) ∉ e_{m,n}^spa

wherein v_{p,n}^(i) represents the i-th joint in the n-th frame of the p-th view, and e_{m,n}^spa represents the m-th hyperedge in the n-th spatial hypergraph, wherein m = 1, 2, ..., M, and M is the number of hyperedges in a spatial hypergraph (Wei et al.: 3.2 HGCN: Para. 1 lines 3-4, Para. 2 lines 1-3, “incident matrix H”).

With regards to claim 8, the combination of Wei et al., Xue et al., and Xia et al. discloses the multi-view human action recognition method based on hypergraph learning according to claim 7, wherein step 24 comprises that a calculation formula of the degree d_n^spa(v_{p,n}^(i)) of the vertex v_{p,n}^(i) ∈ V_n^spa in the n-th spatial hypergraph is:

d_n^spa(v_{p,n}^(i)) = Σ_{e_{m,n}^spa ∈ ε_n^spa} W_n^spa(e_{m,n}^spa) H_n^spa(v_{p,n}^(i), e_{m,n}^spa)

wherein W_n^spa(e_{m,n}^spa) is a weight vector of the hyperedge e_{m,n}^spa (Wei et al.: 3.2 HGCN: Para. 2 lines 4-6, “degree of node”).

With regards to claim 9, the combination of Wei et al., Xue et al., and Xia et al. discloses the multi-view human action recognition method based on hypergraph learning according to claim 8, wherein step 24 further comprises that a calculation formula of the degree δ_n^spa(e_{m,n}^spa) of the hyperedge e_{m,n}^spa ∈ ε_n^spa in the n-th spatial hypergraph is:

δ_n^spa(e_{m,n}^spa) = Σ_{v_{p,n}^(i) ∈ V_n^spa} H_n^spa(v_{p,n}^(i), e_{m,n}^spa)

wherein D_e^n and D_v^n represent diagonal matrices of the degrees of the hyperedges and the degrees of the vertices in the n-th spatial hypergraph, respectively (Wei et al.: 3.2 HGCN: Para. 2 lines 7-9 and 14-15, “degree of hyperedge”, “diagonal matrices”).
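The degree formulas recited in claims 8 and 9 can be checked numerically. The toy incidence matrix and hyperedge weights below are invented for illustration only (6 joints, 3 body-part hyperedges), not values from the application or the cited art:

```python
import numpy as np

# Toy spatial hypergraph: H[v, e] = 1 iff vertex (joint) v lies on hyperedge
# (body part) e -- the incidence matrix of claim 7. Values are assumptions.
H = np.array([
    [1, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
    [0, 0, 1],
], dtype=float)
w = np.array([1.0, 0.5, 2.0])  # assumed hyperedge weights W(e)

# Claim 8: vertex degree d(v) = sum over hyperedges of W(e) * H(v, e).
d_v = H @ w
# Claim 9: hyperedge degree delta(e) = sum over vertices of H(v, e).
delta_e = H.sum(axis=0)

assert np.allclose(d_v, [1.0, 1.0, 0.5, 2.5, 2.0, 2.0])
assert np.allclose(delta_e, [2.0, 2.0, 3.0])
```

Note that a vertex degree is weight-aware (it sums hyperedge weights) while a hyperedge degree simply counts member vertices, matching the asymmetry between the two recited formulas.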
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Wei et al. (Dynamic Hypergraph Convolutional Networks for Skeleton-Based Action Recognition) in view of Xue et al. (CN 108537109, see translated version) and further in view of Xia et al. (CN 110363236, see translated version) and Wikipedia (Laplacian matrix).

With regards to claim 10, the combination of Wei et al., Xue et al., and Xia et al. discloses the multi-view human action recognition method based on hypergraph learning according to claim 9, wherein a calculation formula of the Laplace matrix G_n^spa is:

G_n^spa = I − (D_v^n)^(−1/2) H_n^spa W_n^spa (D_e^n)^(−1) (H_n^spa)^T (D_v^n)^(−1/2)

wherein (D_v^n)^(−1/2) represents the square root of an inverse matrix of the diagonal matrix composed of the degrees of the vertices in the n-th spatial hypergraph, and (D_e^n)^(−1) represents an inverse matrix of the diagonal matrix composed of the degrees of the hyperedges in the n-th spatial hypergraph (Xia et al.: Para. 0115 lines 1-2, 0116 lines 1-3, 0118 lines 1-9, 0119 line 1, 0121 lines 1-2, 0122 line 1, 0124 lines 1-5, “Laplacian matrix”; see corresponding equations in Para. 0108-0119 of original version of Xia et al.).

The combination of Wei et al., Xue et al., and Xia et al. does not explicitly teach G_n^spa = (D_v^n)^(−1/2) H_n^spa W_n^spa (D_e^n)^(−1) (H_n^spa)^T (D_v^n)^(−1/2). However, Wikipedia discloses that the symmetrically normalized Laplacian matrix is defined as L_sym = I − D^(−1/2) A D^(−1/2) and can also be written as L_sym = D^(−1/2) B W B^T D^(−1/2) (Page 8 Lines 1-5 and 12-13, “Lsym”). While the combination of Wei et al., Xue et al., and Xia et al. discloses the Laplacian matrix in the form G_n^spa = I − (D_v^n)^(−1/2) H_n^spa W_n^spa (D_e^n)^(−1) (H_n^spa)^T (D_v^n)^(−1/2), Wikipedia teaches that the Laplacian matrix can also be written in the form G_n^spa = (D_v^n)^(−1/2) H_n^spa W_n^spa (D_e^n)^(−1) (H_n^spa)^T (D_v^n)^(−1/2).
Thus, the combination of Wei et al., Xue et al., and Xia et al. would be modified to substitute the Laplacian matrix G_n^spa = I − (D_v^n)^(−1/2) H_n^spa W_n^spa (D_e^n)^(−1) (H_n^spa)^T (D_v^n)^(−1/2) with G_n^spa = (D_v^n)^(−1/2) H_n^spa W_n^spa (D_e^n)^(−1) (H_n^spa)^T (D_v^n)^(−1/2). In both cases, the symmetrically normalized Laplacian matrix is obtained.

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wei et al., Xue et al., and Xia et al. to replace the Laplacian matrix of the form G_n^spa = I − (D_v^n)^(−1/2) H_n^spa W_n^spa (D_e^n)^(−1) (H_n^spa)^T (D_v^n)^(−1/2) with the Laplacian matrix of the form G_n^spa = (D_v^n)^(−1/2) H_n^spa W_n^spa (D_e^n)^(−1) (H_n^spa)^T (D_v^n)^(−1/2), as taught by Wikipedia, since one of ordinary skill in the art would have been able to carry out such a substitution and the results of the substitution would be predictable, obtaining a symmetrically normalized Laplacian matrix.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicants are directed to consider additional pertinent prior art included on the Notice of References Cited (PTOL-892) attached herewith.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROL W CHAN, whose telephone number is (571) 272-5766. The examiner can normally be reached 9:30-3:30 M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CAROL W CHAN/
Primary Examiner, Art Unit 2672

Prosecution Timeline

Nov 13, 2023
Application Filed
Nov 18, 2025
Non-Final Rejection — §102, §103, §112
Apr 13, 2026
Applicant Interview (Telephonic)
Apr 13, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579803
TOTAGRAPHY FOR SUPERRESOLUTION IMAGING AND SIGNAL PROCESSING OF POSITIVE, REAL-VALUED IMAGES AND SIGNALS
2y 5m to grant Granted Mar 17, 2026
Patent 12573205
ELECTRONIC DEVICE AND METHOD FOR VEHICLE WHICH ENHANCES DRIVING ENVIRONMENT RELATED FUNCTION
2y 5m to grant Granted Mar 10, 2026
Patent 12573240
LIGHT SOURCE SPECTRUM AND MULTISPECTRAL REFLECTIVITY IMAGE ACQUISITION METHODS AND APPARATUSES, AND ELECTRONIC DEVICE
2y 5m to grant Granted Mar 10, 2026
Patent 12573206
BIRD’S-EYE VIEW ADAPTIVE INFERENCE RESOLUTION
2y 5m to grant Granted Mar 10, 2026
Patent 12567237
OBJECT EVALUATION METHOD, OBJECT EVALUATION DEVICE, NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+36.2%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 351 resolved cases by this examiner. Grant probability derived from career allow rate.
