DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in REPUBLIC OF INDIA on 08/14/2021. It is noted, however, that applicant has not filed a certified copy of the IN202111036918 application as required by 37 CFR 1.55.
Response to Arguments
Applicant's arguments filed 10/14/2025 have been fully considered but they are not persuasive.
Regarding applicant’s argument starting on page 28, applicant argues that Joshi fails to disclose assigning…a plurality of scores to each of the plurality of links based on each of a plurality of predefined criteria. The rejection on pages 14-15 of the last Office action cites Joshi’s attributed adjacency graph (AAG) representing a CAD model, in which the arcs correspond to the edges between faces. Joshi teaches assigning an attribute value to each arc that represents a geometric property of the corresponding edge (cited in OA: “which share the common edge e … every arc a in A, is assigned an attribute t, where t = 0, if the faces sharing the edge form a concave angle and t = 1, if the faces sharing the edge form a convex angle”). Under the broadest reasonable interpretation (BRI), Joshi teaches a value, i.e., a “score,” assigned to each graph link corresponding to an edge of the product model. The score is interpreted as any numeric value assigned to the links. Applicant’s arguments rely on importing limitations from the specification into the claim. However, claims are given their broadest reasonable interpretation, and limitations from the specification may not be read into the claims (see MPEP 2111.01(II)).
Regarding applicant’s argument starting on page 30, applicant argues that Shi fails to disclose determining…a cumulative score for each of the plurality of links based on the plurality of scores assigned to the each of the plurality of links. In the rejection on page 16 of the last Office action, Shi is cited as disclosing computing a score using multiple geometric and topological parameters (cited in OA: “The element is a real number calculated by a function as Fs=f(Ei,Vj,Lk,Fg), in which Ei, Vj, Lk, Fg are scores that are predefined according to the edge geometry (convex/concave edge), edge-vertex connectivity, and face geometry (planar/cylindrical face)”). Under the broadest reasonable interpretation (BRI), the cumulative score is interpreted as a score generated by combining multiple scores from the geometric and topological parameters of the CAD model. Applicant’s argument relies on a requirement that the score be computed for graph links specifically, which is not the language recited in the claim limitation.
Regarding applicant’s argument starting on page 32, applicant argues that Joshi fails to disclose extracting…sub-graphs from the graph by discarding one or more links from the plurality of links when the cumulative score of each of the one or more links exceeds a predefined threshold value. The rejection on page 15 of the last Office action cites Joshi’s AAG representation of a CAD model, including discarding links and extracting subgraphs (cited in OA: “The heuristic is based on the following observation: a face that is adjacent to all its neighbouring faces with a convex angle does not form part of a feature. This is used as the basis to separate out the subgraphs that could correspond to features from the original graph. These subgraphs are analysed by the Recognizer to determine the feature types. The procedure to recognise the features is outlined below. Procedure Recognize_Features create AAG delete nodes (and the incident arcs at the nodes) such that for each node deleted, all incident arcs have attribute '1' form components of the graph for each component Call Recognizer if recognized then return (feature_type, comprising_faces)”).
Regarding applicant’s argument starting on page 34, applicant argues that Cao fails to disclose for each of the sub-graphs, extracting…a set of node parameters and a set of edge parameters from the 3D model of the product; for each of the sub-graphs, determining…a node feature vector based on the set of node parameters and an edge feature vector based on the set of edge parameters; and for each of the sub-graphs, determining…a type of manufacturing feature based on corresponding node feature vector and the edge feature vector using a Graph Neural Network (GNN) model, wherein a confidence score is assigned to each of the subgraphs corresponding to the type of manufacturing feature. The rejection on pages 12-13 of the last Office action cites Cao as disclosing extracting geometric and topological parameters from a CAD model and generating feature vectors that are used as inputs to a GNN for feature classification (cited in OA: see previous OA for reference). Under the broadest reasonable interpretation (BRI), the node feature vector and edge feature vector are interpreted as feature vectors derived from node and edge parameters of a graph of the CAD model. For the reasons stated, the rejection relies on the combination of teachings from Cao, Joshi, and Shi.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Cao et al., Non-Patent Literature (“Graph Representation of 3D CAD Models for Machining Feature Recognition with Deep Learning”), in view of Joshi et al., Non-Patent Literature (“Graph-based heuristics for recognition of machined features from a 3D solid model”), and Shi et al., Non-Patent Literature (“A Critical Review of Feature Recognition Techniques”).
Regarding claim 1 and analogous claims 9 and 17:
Cao teaches:
generating, by a feature identification device, a graph corresponding to the product based on the 3D model of the product, (Abstract, “A concise and informative graph representation for 3D CAD models is presented. This is shown to be applicable to graph neural networks.”)
wherein the graph comprises a plurality of nodes corresponding to faces of the product and a plurality of links corresponding to edges of the product, and (Section 1.1, second paragraph, “In graph-based algorithms, the CAD shape is converted into a graph where nodes represent faces and arcs represent edges,”)
for each of the sub-graphs, extracting, by the feature identification device, a set of node parameters and a set of edge parameters from the 3D model of the product; (Section 3.1, paragraph 1, “The number of parameters is determined by the size of the feature vector of each graph node and each edge.”)
for each of the sub-graphs, determining, by the feature identification device, a node feature vector based on the set of node parameters and an edge feature vector based on the set of edge parameters; and (Section 3.1, paragraph 3, “Each node is assigned a feature vector representing the geometry of the face” and Section 3.1, paragraph 4, “The geometry of a face can be described by an implicit equation and coefficients of the implicit equation form the feature vector. The implicit equation of a planar face can be written as Eq. (2), where a, b and c are components of its normal vector and d is its distance to the coordinate origin. The feature vector of the face is then defined as [a b c d]^T.”)
for each of the sub-graphs, determining, by the feature identification device, a type of manufacturing feature based on corresponding node feature vector and the edge feature vector using a Graph Neural Network (GNN) model, wherein a confidence score is assigned to each of the subgraphs corresponding to the type of manufacturing feature. (Section 3.2, paragraph 1, “The adopted graph neural network architecture follows the Pytorch Geometric [22] implementation of dynamic graph CNN [16] for segmentation. Figure 8 shows the architecture of the GNN. The input is a graph as described in Section 3.1, where the number of nodes may vary and each node has a feature vector of length 4. GNN performs convolutions on the graph edges as described in [14]. After three edge convolutions, GNN predicts a probability vector of length 𝐹 for each node as outputs, where 𝐹 −1 is the number of feature types.”)
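For illustration only, the planar-face feature vector [a b c d] quoted above from Cao’s Eq. (2) can be sketched as follows; the function name and inputs are assumptions for illustration, not Cao’s code:

```python
# Illustrative sketch (assumed names, not Cao's implementation): build the
# feature vector [a, b, c, d] of a planar face, where (a, b, c) is the unit
# normal and d follows from the plane equation a*x + b*y + c*z + d = 0.
import math

def plane_feature_vector(normal, point_on_plane):
    a, b, c = normal
    norm = math.sqrt(a * a + b * b + c * c)
    a, b, c = a / norm, b / norm, c / norm  # normalize the normal vector
    x, y, z = point_on_plane
    d = -(a * x + b * y + c * z)  # signed distance term of the plane equation
    return [a, b, c, d]
```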
Cao does not explicitly teach:
wherein generating the graph comprises determining an adjacency attribute matrix from the 3D model of the product;
assigning, by the feature identification device, a plurality of scores to each of the plurality of links based on each of a plurality of predefined criteria, based on corresponding edges of the product in the 3D model of the product;
determining, by the feature identification device, a cumulative score for each of the plurality of links based on the plurality of scores assigned to the each of the plurality of links;
extracting, by the feature identification device, sub-graphs from the graph by discarding one or more links from the plurality of links when the cumulative score of each of the one or more links exceeds a predefined threshold value;
Joshi teaches:
wherein generating the graph comprises determining an adjacency attribute matrix from the 3D model of the product; (Page 59, Col 2, “The attributed adjacency graph (AAG) can be defined as a graph G = (N, A, T)” and “Figure 1 shows an example of the AAG for a part. The AAG is represented in the computer in the form of a matrix.”)
assigning, by the feature identification device, a plurality of scores to each of the plurality of links based on each of a plurality of predefined criteria, based on corresponding edges of the product in the 3D model of the product; (Page 59, Col 2, “The attributed adjacency graph (AAG) can be defined as a graph G = (N, A, T) where N is the set of nodes, A is the set of arcs, and T is the set of attributes to arcs in A, such that • for every face f in F, there exists a unique node n in N • for every edge e in E, there exists a unique arc a in A, connecting the nodes ni and nj, corresponding to face fi and face fj, which share the common edge e • every arc a in A, is assigned an attribute t, where t = 0, if the faces sharing the edge form a concave angle and t = 1, if the faces sharing the edge form a convex angle”)
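For illustration only, the AAG-as-matrix representation quoted above can be sketched as below; the -1 sentinel for non-adjacent faces and the function name are assumptions for illustration, not Joshi’s code:

```python
# Illustrative sketch (assumed names, not Joshi's implementation): an
# attributed adjacency matrix where entry (i, j) holds the arc attribute
# t (0 = concave edge, 1 = convex edge) and -1 marks non-adjacent faces.
def adjacency_attribute_matrix(num_faces, arcs):
    """arcs: dict mapping face-index pairs (i, j) to attribute t in {0, 1}."""
    matrix = [[-1] * num_faces for _ in range(num_faces)]
    for (i, j), t in arcs.items():
        matrix[i][j] = t  # the matrix is symmetric because arcs are undirected
        matrix[j][i] = t
    return matrix
```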
extracting, by the feature identification device, sub-graphs from the graph by discarding one or more links from the plurality of links when the cumulative score of each of the one or more links exceeds a predefined threshold value; (Page 61, Col 2, “The heuristic is based on the following observation: a face that is adjacent to all its neighbouring faces with a convex angle does not form part of a feature. This is used as the basis to separate out the subgraphs that could correspond to features from the original graph. These subgraphs are analysed by the Recognizer to determine the feature types. The procedure to recognise the features is outlined below. Procedure Recognize_Features create AAG delete nodes (and the incident arcs at the nodes) such that for each node deleted, all incident arcs have attribute '1' form components of the graph for each component Call Recognizer if recognized then return (feature_type, comprising_faces).”)
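For illustration only, the quoted Recognize_Features heuristic (delete every node whose incident arcs are all convex, then form connected components as candidate feature subgraphs) can be sketched as below; all names are assumptions, not Joshi’s code:

```python
# Illustrative sketch (assumed names, not Joshi's implementation) of the
# quoted heuristic: drop nodes whose incident arcs all carry attribute 1
# (convex); the connected components that remain are candidate features.
def candidate_feature_subgraphs(nodes, arcs):
    """nodes: iterable of face ids; arcs: dict {(ni, nj): t} with t in {0, 1}."""
    incident = {n: [] for n in nodes}
    for (ni, nj), t in arcs.items():
        incident[ni].append(t)
        incident[nj].append(t)
    # Keep a node only if at least one incident arc is concave (t == 0).
    kept = {n for n in nodes if 0 in incident[n]}
    adj = {n: set() for n in kept}
    for (ni, nj) in arcs:
        if ni in kept and nj in kept:
            adj[ni].add(nj)
            adj[nj].add(ni)
    # Depth-first traversal to collect the surviving connected components.
    components, seen = [], set()
    for n in kept:
        if n in seen:
            continue
        comp, stack = set(), [n]
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            comp.add(cur)
            stack.extend(adj[cur] - seen)
        components.append(comp)
    return components
```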
Joshi and Cao are both related to the same field of endeavor (i.e., computer aided design (CAD)). In view of the teachings of Joshi it would have been obvious for a person of ordinary skill in the art to apply the teachings of Joshi to Cao before the effective filing date of the claimed invention in order to improve the accuracy and efficiency of classifying manufacturing features (Joshi, Abstract, “The internal representation of the solid modeller provides a description of parts which when used directly is useful for automation of the process planning function. So that the CAD model can be used to provide the information required for manufacturing, techniques to improve machine understanding of the part as required for manufacturing are needed.”)
Shi teaches:
determining, by the feature identification device, a cumulative score for each of the plurality of links based on the plurality of scores assigned to the each of the plurality of links; (Page 887, Section 4.4.2.3, “The element is a real number calculated by a function as Fs=f(Ei,Vj,Lk,Fg), in which Ei, Vj, Lk, Fg are scores that are predefined according to the edge geometry (convex/concave edge), edge-vertex connectivity, and face geometry (planar/cylindrical face).”)
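For illustration only, a score combining predefined sub-scores in the spirit of Shi’s Fs = f(Ei, Vj, Lk, Fg) can be sketched as below; the specific score tables and the use of summation as f are assumptions, since the quoted passage does not specify f:

```python
# Illustrative sketch (assumed values, not Shi's implementation): combine
# predefined sub-scores for edge geometry, vertex connectivity, loop type,
# and face geometry into one value, in the spirit of Fs = f(Ei, Vj, Lk, Fg).
EDGE_SCORES = {"convex": 1.0, "concave": 2.0}
VERTEX_SCORES = {"simple": 0.1, "complex": 0.5}
LOOP_SCORES = {"inner": 0.3, "outer": 0.0}
FACE_SCORES = {"planar": 0.0, "cylindrical": 0.7}

def cumulative_score(edge_geom, vertex_conn, loop_type, face_geom):
    # Summation stands in for the unspecified combining function f.
    return (EDGE_SCORES[edge_geom] + VERTEX_SCORES[vertex_conn]
            + LOOP_SCORES[loop_type] + FACE_SCORES[face_geom])
```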
Shi and Cao are both related to the same field of endeavor (i.e., computer aided design (CAD)). In view of the teachings of Shi it would have been obvious for a person of ordinary skill in the art to apply the teachings of Shi to Cao before the effective filing date of the claimed invention in order to improve the accuracy and efficiency of classifying manufacturing features (Shi, Abstract, “Feature is an essential concept during design and manufacturing. It has universal characters for specific definitions of interest. Feature Recognition (FR) is a technique to identify and extract application-specific information from input models for downstream engineering activities. FR is a necessary and important component for the integration of Computer-Aided Design (CAD), Computer-Aided Manufacturing (CAM), Computer-Aided Engineering (CAE), and Computer-Aided Process Planning (CAPP).”)
Regarding claim 2 and analogous claim 10:
Cao, as modified by Joshi and Shi, teaches the method of claim 1.
Cao further teaches:
wherein the 3D model is a boundary representation (B- rep) based Computer Aided Design (CAD) model. (Section 3.1, paragraph 3, “a graph representation constructed from the B-Rep model is proposed as input for deep neural networks. Given the B-Rep model of a CAD shape,”)
The motivation for claim 2 is the same as the motivation for claim 1.
Regarding claim 3 and analogous claims 11 and 18:
Cao, as modified by Joshi and Shi, teaches the method of claim 1.
Shi further teaches:
the set of node parameters comprises a face type, face smoothness, face convexity, face area, and presence of inner loop; and (Page 887, Section 4.4.2.2, “In the input array, each element contains a vector with eight integers, which denote topology attributes such as edge type, face type, face angle type, number of loops, etc.”)
the set of edge parameters comprises an edge type, edge convexity, inner loop edge, outer loop edge, and edge angle. (Page 887, Section 4.4.2.3, “It is a compact representation in which each element corresponds to a face in the part. The element is a real number calculated by a function as Fs=f(Ei,Vj,Lk,Fg), in which Ei, Vj, Lk, Fg are scores that are predefined according to the edge geometry (convex/concave edge), edge-vertex connectivity, and face geometry (planar/cylindrical face).”)
The motivation for claims 3, 11, and 18 is the same as the motivation for claim 1.
Regarding claim 4 and analogous claim 12:
Cao, as modified by Joshi and Shi, teaches the method of claim 1.
Shi further teaches:
wherein the adjacency attribute matrix comprises a plurality of rows and a plurality of columns corresponding to faces of the product, and (Page 877, Paragraph 2, “transformed the part graph into a Multi-Attributed Adjacency Matrix, where each row and column represented faces and the cells were the adjacency indicated by numbers.”)
a plurality of matrix elements representing connection between two faces of the product. (Page 886, paragraph 2, “The matrix is assigned by numerical values indicating the types of vertices and edges.”)
The motivation for claims 4 and 12 is the same as the motivation for claim 1.
Regarding claim 5 and analogous claims 13 and 19:
Cao, as modified by Joshi and Shi, teaches the method of claim 1.
Shi further teaches:
wherein the plurality of predefined criteria comprises presence of a loop type, convexity of vertices, and neighbour convexity variation. (Page 873, Section 3.5, “The total graph explicitly represents the faces of an object and their mutual adjacency relations. [3] further developed EFG by adding a dashed arc representing vertices connectivity. The new graph was named face adjacency hyper-graph (FAG). It becomes a complete description of the hierarchical graph structure of an object, in which the nodes correspond to the object faces, and the arcs and dashed arcs represent relationships among faces induced by the edges and vertices.”)
The motivation for claims 5, 13, and 19 is the same as the motivation for claim 1.
Regarding claim 7 and analogous claim 15:
Cao, as modified by Joshi and Shi, teaches the method of claim 1.
Cao further teaches:
wherein the GNN model uses a negative log-likelihood loss function to determine the type of manufacturing feature, and (Section 3.2, “GNN predicts a probability vector of length 𝐹 for each node as outputs, where 𝐹 −1 is the number of feature types. The output vectors are compared with ground truth labels to compute a negative log likelihood loss.”)
wherein the type of manufacturing feature comprises at least one of a pocket, a slot, a boss, a groove, and a hole. (Section 2, paragraph 1 and Table 1, “The same 24 types of machining features are used as examples to illustrate the method. The approaches described here are not limited to these 24 features only, as other types of machining features can also be processed in the same framework. Figure 2 shows some example shapes generated using the method. Each feature type is indicated by a specific color as shown in Table 1. The generated dataset is representative in the sense that all possible feature combinations are covered.”)
[Image: media_image1.png, greyscale, 940 × 379]
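For illustration only, the negative log likelihood loss quoted above (Cao, Section 3.2) can be sketched for a single node’s predicted probability vector; the function name is an assumption, not Cao’s code:

```python
# Illustrative sketch (assumed name, not Cao's implementation): negative
# log-likelihood loss for one node, given its predicted probability vector
# and the index of the ground-truth feature type.
import math

def nll_loss(probabilities, true_label):
    # Loss is the negative log of the probability assigned to the true class;
    # a confident correct prediction yields a loss near zero.
    return -math.log(probabilities[true_label])
```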
The motivation for claims 7 and 15 is the same as the motivation for claim 1.
Regarding claim 8 and analogous claim 16:
Cao, as modified by Joshi and Shi, teaches the method of claim 1.
Cao further teaches:
wherein the GNN model is trained using a dataset comprising a set of graphs that represents a plurality of manufacturing features. (Section 4, paragraph 1, “two experiments are conducted to test the ability of the graph representation methods to identify machining features in CAD models. First, the graph representation is compared with an existing voxel method. Then, the graph neural network is tested on a dataset with complex interacting features.”)
The motivation for claims 8 and 16 is the same as the motivation for claim 1.
Claims 6, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Cao, as modified by Joshi and Shi, further in view of Li et al., Non-Patent Literature (“Learning Graph-Level Representation for Drug Discovery”).
Regarding claim 6 and analogous claim 14:
Cao, as modified by Joshi and Shi, teaches the method of claim 1.
Cao, as modified by Joshi and Shi, does not explicitly teach:
wherein the GNN model comprises a set of graph convolution layers, a set of corresponding pooling layers, and a fully connected dense layer, and wherein each of the set of convolution layers is followed by each of the set of corresponding pooling layer.
Li teaches:
wherein the GNN model comprises a set of graph convolution layers, a set of corresponding pooling layers, and a fully connected dense layer, and wherein each of the set of convolution layers is followed by each of the set of corresponding pooling layer. (Figure 2, “We apply three graph convolution blocks (GC + RELU + BN) and two graph pooling layers, then we feed the feature of the dummy super node to a two-layer classifier.”)
[Image: media_image2.png, greyscale, 185 × 971]
A person of ordinary skill in the art would reasonably find the teachings of Li to be helpful in solving the problem of classifying manufacturing features from 3D models using a graph neural network architecture (GNN) in Cao. In view of the teachings of Li it would have been obvious for a person of ordinary skill in the art to apply the teachings of Li to Cao before the effective filing date of the claimed invention in order to apply graph convolutional layers having pooling and dense layers in the graph neural network GNN architecture (Li, Abstract, “Molecules can be represented as an undirected graph, and we can utilize graph convolution networks to predication molecular properties. However, graph convolutional networks and other graph neural networks all focus on learning node-level representation rather than graph-level representation. Previous works simply sum all feature vectors for all nodes in the graph to obtain the graph feature vector for drug predication. In this paper, we introduce a dummy super node that is connected with all nodes in the graph by a directed edge as the representation of the graph and modify the graph operation to help the dummy super node learn graph-level feature. Thus, we can handle graph-level classification and regression in the same way as node-level classification and regression.”).
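For illustration only, the layer ordering at issue in claim 6 (each graph convolution followed by a pooling layer, ending in a fully connected dense layer) can be sketched with toy scalar node features; this is not Li’s implementation, and all names and values are assumptions:

```python
# Illustrative sketch (toy scalar features, not Li's implementation) of the
# conv -> pool -> dense ordering: a convolution mixes each node with its
# neighbours, pooling aggregates the graph, and a dense layer maps the
# pooled value to an output.
def graph_conv(features, adj):
    """Average each node's feature with its neighbours' (toy convolution)."""
    out = []
    for i, f in enumerate(features):
        neigh = [features[j] for j in adj[i]] + [f]
        out.append(sum(neigh) / len(neigh))
    return out

def global_mean_pool(features):
    """Pool all node features into one graph-level value."""
    return sum(features) / len(features)

def dense(x, weight, bias):
    """A one-unit fully connected layer on the pooled value."""
    return weight * x + bias

def toy_gnn(features, adj):
    h = graph_conv(features, adj)        # graph convolution layer
    pooled = global_mean_pool(h)         # corresponding pooling layer
    return dense(pooled, 2.0, 1.0)       # fully connected dense layer
```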
Regarding claim 20:
Cao, as modified by Joshi and Shi, teaches the method of claim 1.
Cao further teaches:
the GNN model uses a negative log-likelihood loss function to determine the type of manufacturing feature, and (Section 3.2, “GNN predicts a probability vector of length 𝐹 for each node as outputs, where 𝐹 −1 is the number of feature types. The output vectors are compared with ground truth labels to compute a negative log likelihood loss.”)
wherein the type of manufacturing feature comprises at least one of a pocket, a slot, a boss, a groove, and a hole; and (Section 2, paragraph 1 and Table 1, “The same 24 types of machining features are used as examples to illustrate the method. The approaches described here are not limited to these 24 features only, as other types of machining features can also be processed in the same framework. Figure 2 shows some example shapes generated using the method. Each feature type is indicated by a specific color as shown in Table 1. The generated dataset is representative in the sense that all possible feature combinations are covered.”)
[Image: media_image1.png, greyscale, 940 × 379]
the GNN model is trained using a dataset comprising a set of graphs that represents a plurality of manufacturing features. (Section 4, paragraph 1, “two experiments are conducted to test the ability of the graph representation methods to identify machining features in CAD models. First, the graph representation is compared with an existing voxel method. Then, the graph neural network is tested on a dataset with complex interacting features.”)
Cao does not explicitly teach:
the GNN model comprises a set of graph convolution layers, a set of corresponding pooling layers, and a fully connected dense layer, and wherein each of the set of convolution layers is followed by each of the set of corresponding pooling layer;
Li further teaches:
the GNN model comprises a set of graph convolution layers, a set of corresponding pooling layers, and a fully connected dense layer, and wherein each of the set of convolution layers is followed by each of the set of corresponding pooling layer. (Figure 2, “We apply three graph convolution blocks (GC + RELU + BN) and two graph pooling layers, then we feed the feature of the dummy super node to a two-layer classifier.”)
PNG
media_image2.png
185
971
media_image2.png
Greyscale
The motivation for claim 20 is the same as the motivation for claim 6.
Conclusion
Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMINA BENOURAIDA, whose telephone number is (571) 272-4340. The examiner can normally be reached Monday-Friday, 8:30 am-5 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J. Huntley can be reached at 303-297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMINA MORENO BENOURAIDA/ Examiner, Art Unit 2129
/MICHAEL J HUNTLEY/Supervisory Patent Examiner, Art Unit 2129