DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on May 16, 2023; July 20, 2023; September 7, 2023; October 20, 2023; November 20, 2023; January 18, 2024; March 15, 2024; March 29, 2024; May 6, 2024; July 17, 2024; August 26, 2024; September 26, 2024; October 11, 2024; December 12, 2024; February 21, 2025; March 25, 2025; May 13, 2025; July 21, 2025; September 5, 2025; December 12, 2025; and January 8, 2026, are in compliance with the provisions of 37 CFR 1.97 and have been considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.
Claim 1:
Regarding claim 1, in step 1 of the 101-analysis set forth in MPEP 2106, the claim recites
“an identity management system, comprising: a data store; a processor; a non-transitory, computer-readable storage medium, including computer instructions for: obtaining identity management data from one or more source systems in a distributed enterprise computing environment of an enterprise, the identity management data comprising data on a set of identities, a set of entitlements, or a set of roles, wherein the set of identities, set of entitlements or set of roles are utilized in identity management in the distributed enterprise computing environment; generating a first identity graph from the identity management data at a first time; training a first graph neural network for a first identity management component; and adapting the first identity management component to use the first graph neural network such that the first identity management component is adapted to generate a first identity management signal using the first graph neural network,” and a system or machine is one of the four statutory categories of invention.
In step 2A prong 1 of the 101-analysis set forth in the MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components:
generating a first identity graph from the identity management data at a first time; (This is a mental process: a person can mentally evaluate and generate a first identity graph from the identity management data at a first time; see MPEP 2106.04(a)(2)(III)),
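Purely for illustration (nothing below is taken from the application; all names and data are hypothetical), a generation of an identity graph from identity management data of the kind recited can be sketched as follows:

```python
# Illustrative sketch only: a minimal "identity graph" built from hypothetical
# identity management data (identities, entitlements, and which identity holds
# which entitlement). Names and structure are assumptions for illustration and
# are not taken from the application.

def build_identity_graph(identity_data):
    """Build an adjacency structure with identity and entitlement nodes."""
    graph = {"nodes": set(), "edges": set()}
    for identity, entitlements in identity_data.items():
        graph["nodes"].add(("identity", identity))
        for ent in entitlements:
            graph["nodes"].add(("entitlement", ent))
            # An edge records that the identity holds the entitlement.
            graph["edges"].add((("identity", identity), ("entitlement", ent)))
    return graph

# Hypothetical snapshot of identity management data at a first time.
data_t1 = {
    "alice": ["db_read", "db_write"],
    "bob": ["db_read"],
}

graph_t1 = build_identity_graph(data_t1)
print(len(graph_t1["nodes"]))  # 4 nodes: 2 identities + 2 entitlements
print(len(graph_t1["edges"]))  # 3 holder edges
```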
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a mental process but for the recitation of generic computer components, then it falls within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In step 2A prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
An identity management system, comprising: a data store; a processor; a non-transitory, computer-readable storage medium, including computer instructions, (This is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
for: obtaining identity management data from one or more source systems in a distributed enterprise computing environment of an enterprise, the identity management data comprising data on a set of identities, a set of entitlements, or a set of roles, wherein the set of identities, set of entitlements or set of roles are utilized in identity management in the distributed enterprise computing environment, (In step 2A, prong 2, obtaining identity management data recites mere data gathering, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)),
training a first graph neural network for a first identity management component; (Training a graph neural network is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
and adapting the first identity management component to use the first graph neural network such that the first identity management component is adapted to generate a first identity management signal using the first graph neural network, (Adapting the component to generate a first identity management signal is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
In step 2B of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the additional elements of the generic computer components, the training of the first graph neural network, and the adapting of the first identity management component recite mere instructions to apply the judicial exception using generic computer components, which are not indicative of significantly more. The additional element of obtaining the identity management data recites mere data gathering and is considered insignificant extra-solution activity. In step 2B, this insignificant extra-solution activity is also well-understood, routine, and conventional activity, as receiving or transmitting data over a network has been recognized as such by the courts; see Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016); see MPEP 2106.05(d)(II)(i).
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Claim 2:
Regarding claim 2, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to claim 1. Further, claim 2 recites the following abstract idea:
The system of claim 1, wherein training the first graph neural network comprises: generating a first embedding from the first graph neural network; (This is considered a mental process, since a person can mentally evaluate and generate an embedding from the graph neural network; see MPEP 2106.04(a)(2)(III)),
Further, claim 2 recites the following additional element:
and training the first graph neural network based on the first embedding and a first loss function associated with the first identity management component, (In step 2A, prong 2, training the first graph neural network is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
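Purely for illustration (a hypothetical sketch, not the application's method), training a graph neural network based on an embedding and a loss function can be pictured as follows; a one-layer graph convolution and a toy adjacency-reconstruction loss stand in for whatever loss a particular identity management component would use:

```python
import numpy as np

# Hypothetical sketch (not the application's method): a one-layer graph
# convolution produces node embeddings, and the weights are trained against
# a loss computed from those embeddings.

rng = np.random.default_rng(0)

A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])          # adjacency of a tiny identity graph
X = np.eye(4)                             # one-hot node features

A_hat = A + np.eye(4)                     # add self-loops
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))  # symmetric normalization

W = rng.normal(scale=0.1, size=(4, 2))    # trainable weights

def embed(W):
    """One graph-convolution layer: Z = tanh(A_norm X W)."""
    return np.tanh(A_norm @ X @ W)

def loss(W):
    """Toy loss on the embedding: reconstruct adjacency via sigmoid(Z Z^T)."""
    Z = embed(W)
    recon = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))
    return float(np.mean((recon - A) ** 2))

# Train with finite-difference gradient descent (kept deliberately simple).
eps, lr = 1e-5, 0.5
loss_before = loss(W)
for _ in range(200):
    grad = np.zeros_like(W)
    base = loss(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy()
            Wp[i, j] += eps
            grad[i, j] = (loss(Wp) - base) / eps
    W -= lr * grad
loss_after = loss(W)

print(loss_before > loss_after)  # True if training reduced the toy loss
```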
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a mental process but for the recitation of generic computer components, then it falls within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 3:
Regarding claim 3, it is dependent upon claim 2, and thereby incorporates the limitations of, and corresponding analysis applied to claim 2. Further, claim 3 recites the following additional element:
The system of claim 2, wherein the first identity management signal is associated with clustering of the identity graph, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 4:
Regarding claim 4, it is dependent upon claim 3, and thereby incorporates the limitations of, and corresponding analysis applied to claim 3. Further, claim 4 recites the following abstract idea:
The system of claim 3, wherein the first loss function is a spectral loss version of modularity, (This is considered a mathematical concept – a mathematical relationship, mathematical formula or equation, or mathematical calculation; see the specification at paragraph [0047]: “for example, one area pertains to a loss function that may be utilized with GNNs for such components. According to embodiments, the rewarding mechanism to calculate the loss function may be tailored to identity governance. For example, in an access or certification request recommendation component, a model may be adapted to focus on the accuracy of recommending revocation or grant of a correct entitlement, while recommenders utilized in other contexts (such as media, etc.) may be optimized to maximize a hit rate (e.g., number of predicted like items over the liked items in ground truth data)”; see MPEP 2106.04(a)(2), subsection I),
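For context only (the specification excerpt above does not set out a formula; the following is one known formulation from the graph-clustering literature, offered purely as an illustration), modularity of a partition and a spectral (trace) relaxation of it that can serve as a differentiable loss may be written as:

```latex
% Modularity Q of a partition of a graph with adjacency A, degrees k_i,
% and m edges; \delta(c_i, c_j) = 1 when nodes i and j share a cluster:
Q = \frac{1}{2m}\sum_{i,j}\left(A_{ij} - \frac{k_i k_j}{2m}\right)\delta(c_i, c_j)

% Spectral (trace) relaxation with a soft cluster-assignment matrix C,
% negated so that minimizing the loss maximizes modularity:
\mathcal{L} = -\frac{1}{2m}\,\operatorname{Tr}\!\left(C^{\top} B\, C\right),
\qquad B_{ij} = A_{ij} - \frac{k_i k_j}{2m}
```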
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a mathematical concept but for the recitation of generic computer components, then it falls within the mathematical concept grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 5:
Regarding claim 5, it is dependent upon claim 2, and thereby incorporates the limitations of, and corresponding analysis applied to claim 2. Further, claim 5 recites the following additional elements:
The system of claim 2, comprising: training a second graph neural network based on the embedding and a second loss function associated with a second identity management component; (In step 2A, prong 2, training a second graph neural network is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
and adapting the second identity management component to generate a second identity management signal using the second graph neural network, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 6:
Regarding claim 6, it is dependent upon claim 5, and thereby incorporates the limitations of, and corresponding analysis applied to claim 5. Further, claim 6 recites the following abstract idea:
generating a second identity graph from the updated identity management data at a second time; (This is considered a mental process, since a person can mentally evaluate and generate an identity graph from the updated data; see MPEP 2106.04(a)(2)(III)),
Further, claim 6 recites the following additional elements:
The system of claim 5, comprising: updating the identity management data; … (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
training a second graph neural network for the first identity management component; (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
and adapting the first identity management component to use the second graph neural network such that the first identity management component is adapted to generate the first identity management signal using the second graph neural network, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a mental process but for the recitation of generic computer components, then it falls within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 7:
Regarding claim 7, it is dependent upon claim 6, and thereby incorporates the limitations of, and corresponding analysis applied to claim 6. Further, claim 7 recites the following abstract idea:
The system of claim 6, wherein training the second graph neural network comprises: generating a second embedding from the second graph neural network; (This is considered a mental process, since a person can mentally evaluate and generate a second embedding from the second graph neural network; see MPEP 2106.04(a)(2)(III)),
Further, claim 7 recites the following additional element:
and training the second graph neural network based on the second embedding and the first loss function associated with the first identity management component, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a mental process but for the recitation of generic computer components, then it falls within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 8:
Regarding claim 8, in step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “a method for identity management, comprising: obtaining identity management data from one or more source systems in a distributed enterprise computing environment of an enterprise, the identity management data comprising data on a set of identities, a set of entitlements, or a set of roles, wherein the set of identities, set of entitlements or set of roles are utilized in identity management in the distributed enterprise computing environment; generating a first identity graph from the identity management data at a first time; training a first graph neural network for a first identity management component; and adapting the first identity management component to use the first graph neural network such that the first identity management component is adapted to generate a first identity management signal using the first graph neural network,” and a method is one of the four statutory categories of invention.
In step 2A prong 1 of the 101-analysis set forth in the MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components:
generating a first identity graph from the identity management data at a first time, (This is considered a mental process: a person can mentally evaluate and generate a first identity graph from the identity management data at a first time; see MPEP 2106.04(a)(2)(III)),
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a mental process but for the recitation of generic computer components, then it falls within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In step 2A prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
a method for identity management, comprising: obtaining identity management data from one or more source systems in a distributed enterprise computing environment of an enterprise, the identity management data comprising data on a set of identities, a set of entitlements, or a set of roles, wherein the set of identities, set of entitlements or set of roles are utilized in identity management in the distributed enterprise computing environment, (In step 2A, prong 2, obtaining identity management data recites mere data gathering, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)),
training a first graph neural network for a first identity management component, (This is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
and adapting the first identity management component to use the first graph neural network such that the first identity management component is adapted to generate a first identity management signal using the first graph neural network, (This is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
In step 2B of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the additional elements of training the first graph neural network and adapting the first identity management component recite mere instructions to apply the judicial exception using generic computer components, which are not indicative of significantly more. The additional element of obtaining the identity management data recites mere data gathering and is considered insignificant extra-solution activity. In step 2B, this insignificant extra-solution activity is also well-understood, routine, and conventional activity, as receiving or transmitting data over a network has been recognized as such by the courts; see Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016); see MPEP 2106.05(d)(II)(i).
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Claim 9:
Regarding claim 9, it is dependent upon claim 8, and thereby incorporates the limitations of, and corresponding analysis applied to claim 8. Further, claim 9 recites the following abstract idea:
The method of claim 8, wherein training the first graph neural network comprises: generating a first embedding from the first graph neural network; (This is considered a mental process, since a person can mentally evaluate and generate a first embedding from the first graph neural network; see MPEP 2106.04(a)(2)(III)),
Further, claim 9 recites the following additional element:
and training the first graph neural network based on the first embedding and a first loss function associated with the first identity management component, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 10:
Regarding claim 10, it is dependent upon claim 9, and thereby incorporates the limitations of, and corresponding analysis applied to claim 9. Further, claim 10 recites the following additional element:
The method of claim 9, wherein the first identity management signal is associated with clustering of the identity graph. (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 11:
Regarding claim 11, it is dependent upon claim 10, and thereby incorporates the limitations of, and corresponding analysis applied to claim 10. Further, claim 11 recites the following abstract idea:
The method of claim 10, wherein the first loss function is a spectral loss version of modularity. (This is considered a mathematical concept – a mathematical relationship, mathematical formula or equation, or mathematical calculation; see the specification at paragraph [0047]: “for example, one area pertains to a loss function that may be utilized with GNNs for such components. According to embodiments, the rewarding mechanism to calculate the loss function may be tailored to identity governance. For example, in an access or certification request recommendation component, a model may be adapted to focus on the accuracy of recommending revocation or grant of a correct entitlement, while recommenders utilized in other contexts (such as media, etc.) may be optimized to maximize a hit rate (e.g., number of predicted like items over the liked items in ground truth data)”; see MPEP 2106.04(a)(2), subsection I),
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a mathematical concept but for the recitation of generic computer components, then it falls within the mathematical concept grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 12:
Regarding claim 12, it is dependent upon claim 9, and thereby incorporates the limitations of, and corresponding analysis applied to claim 9. Further, claim 12 recites the following additional elements:
The method of claim 9, further comprising: training a second graph neural network based on the embedding and a second loss function associated with a second identity management component; (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
and adapting the second identity management component to generate a second identity management signal using the second graph neural network, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 13:
Regarding claim 13, it is dependent upon claim 12, and thereby incorporates the limitations of, and corresponding analysis applied to claim 12. Further, claim 13 recites the following abstract idea:
generating a second identity graph from the updated identity management data at a second time; (This is considered a mental process, since a person can mentally evaluate and generate a second identity graph from the updated identity management data at a second time; see MPEP 2106.04(a)(2)(III)),
Further, claim 13 recites the following additional elements:
The method of claim 12, further comprising: updating the identity management data; (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
training a second graph neural network for the first identity management component; (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
and adapting the first identity management component to use the second graph neural network such that the first identity management component is adapted to generate the first identity management signal using the second graph neural network, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a mental process but for the recitation of generic computer components, then it falls within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 14:
Regarding claim 14, it is dependent upon claim 13, and thereby incorporates the limitations of, and corresponding analysis applied to claim 13. Further, claim 14 recites the following abstract idea:
The method of claim 13, wherein training the second graph neural network comprises: generating a second embedding from the second graph neural network; (This is considered a mental process, since a person can mentally evaluate and generate a second embedding from the second graph neural network; see MPEP 2106.04(a)(2)(III)),
Further, claim 14 recites the following additional element:
and training the second graph neural network based on the second embedding and the first loss function associated with the first identity management component, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a mental process but for the recitation of generic computer components, then it falls within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 15:
Claim 15 recites limitations similar to those of corresponding claim 1; specifically, claim 15 recites “a non-transitory computer readable medium, comprising instructions,” while claim 1 recites “an identity management system, comprising: a data store; a processor; a non-transitory, computer-readable storage medium, including computer instructions,” and claim 15 is rejected for similar reasons using a similar rationale as described hereinabove.
Claims 16-21:
Regarding claims 16-21, the claims recite limitations similar to those of corresponding claims 2-7; dependent claims 16-21 inherit the deficiencies of their parent claim 15 and are rejected for similar reasons using a similar rationale as described hereinabove.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 8, 9, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Shekhar S. et al. (US PG Pub. No. US20210233080A1), published on July 29, 2021 (hereinafter, SHEKHAR), in view of Badawy M. et al. (U.S. Patent No. US11227055B1), issued January 18, 2022 (hereinafter, BADAWY21).
Claim 1:
SHEKHAR teaches “an identity management system, comprising: a data store; a processor; a non-transitory, computer-readable storage medium, including computer instructions for:”
See SHEKHAR, paragraph [0117]: "as mentioned, FIG. 9 illustrates a flowchart of a series of acts 900 for determining that a digital identity is associated with a fraudulent transaction in accordance with one or more embodiments. While FIG. 9 illustrates acts according to one embodiment, alternative embodiments may omit, add to, reorder, and/or modify any of the acts shown in FIG. 9. The acts of FIG. 9 can be performed as part of a method. For example, in some embodiments, the acts of FIG. 9 can be performed, in a digital medium environment for executing digital transactions, as part of a computer-implemented method for identifying fraudulent transactions. Alternatively, a non-transitory computer-readable medium can store instructions thereon that, when executed by at least one processor, cause a computing device to perform the acts of FIG. 9. In some embodiments, a system can perform the acts of FIG. 9. For example, in one or more embodiments, a system includes at least one memory device comprising a plurality of digital identities corresponding to a plurality of digital transactions;" Here, SHEKHAR teaches a non-transitory computer-readable medium and a processor.
See SHEKHAR describe in paragraph [0108], which shows a graph data store: "Turning now to FIG. 8, additional detail will now be provided regarding various components and capabilities of the fraudulent transaction detection system 106. In particular, FIG. 8 illustrates the fraudulent transaction detection system 106 implemented by the server(s) 102. As shown, the fraudulent transaction detection system 106 can include, but is not limited to, a transaction graph generator 802, a time-dependent graph convolutional neural network training engine 804, a time-dependent graph convolutional neural network application manager 806, a fraudulence identification engine 808, and data storage 810 (which includes a time-dependent graph convolutional neural network 812, digital transactions 814, and digital identities 816)." Here, SHEKHAR teaches a data store.
Further, SHEKHAR teaches “generating a first identity graph from the identity management data at a first time;”
See SHEKHAR in the abstract describe that “the disclosed systems can utilize a time-dependent graph convolutional neural network to generate node embeddings for the nodes based on the edge connections of the transaction graph” and later in paragraph [0119] describe “the series of acts 900 also includes an act 904 of generating a transaction graph. For example, the act 904 involves generating a transaction graph comprising edge connections between a plurality of nodes corresponding to the plurality of digital identities. More particularly, act 904 can involve generating a node for each of the plurality of digital identities and generating the edge connections between the plurality of nodes based on digital transactions linking the digital identities. In one or more embodiments, the transaction graph comprises a first node corresponding to a first digital identity associated with a set of digital transactions from the plurality of digital transactions; a second node corresponding to a second digital identity associated with the set of digital transactions from the plurality of digital transactions; and a set of edge connections between the first node and the second node, the set of edge connections corresponding to the set of digital transactions.” Further, regarding the transaction timestamps associated with the edge connections 304a-304h in FIG. 3, see SHEKHAR in paragraph [0062]: "as further shown in FIG. 3, the fraudulent transaction detection system 106 can associate transaction timestamps with the edge connections 304a-304h of the transaction graph 300. In particular, the transaction timestamp associated with a particular edge connection can indicate a time at which the corresponding digital transaction took place." Here, SHEKHAR teaches generating a transaction graph of digital identities (i.e., a first identity graph) from the plurality of digital transactions (i.e., identity management data) at a respective transaction timestamp (i.e., a first time).
Further, see SHEKHAR in paragraph [0064] describe “the fraudulent transaction detection system 106 can utilize a time-dependent graph convolutional graph neural network to analyze digital identities that were not represented within the transaction graph 300 at the time of training. Thus, the fraudulent transaction detection system 106 can continuously and flexibly update the transaction graph 300 to represent new digital identities and new transactions without suffering the inefficiencies caused by retraining the model used to analyze the transaction graph 300, which would be required under many conventional systems.”
Note the examiner construes "first time" to mean the time at which an initial model or item is created. In paragraph [0064], SHEKHAR shows generating an initial transaction graph (i.e., identity graph) from the fraudulent transaction detection system 106, and [0062] shows that the fraudulent transaction data is part of the identity management data. SHEKHAR further mentions in [0064] that the system flexibly updates the transaction graph, implicitly disclosing the creation of second, third, and subsequent identity graphs. Overall, SHEKHAR teaches creating a first identity graph from the identity management data at a first time. See paragraphs [0018], [0029], [0031], and [0033]-[0034] for more information.
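For illustration only, the graph-construction step mapped above (one node per digital identity, one timestamped edge connection per transaction) can be sketched as follows; the identifiers and transactions are hypothetical assumptions, not drawn from SHEKHAR:

```python
from collections import defaultdict

def build_identity_graph(transactions):
    """Build a simple identity graph: one node per digital identity and
    one timestamped edge connection per transaction linking a pair of
    identities. `transactions` is an iterable of (id_a, id_b, timestamp)."""
    nodes = set()
    edges = defaultdict(list)  # (id_a, id_b) -> list of transaction timestamps
    for id_a, id_b, ts in transactions:
        nodes.update((id_a, id_b))
        edges[tuple(sorted((id_a, id_b)))].append(ts)
    return nodes, dict(edges)

# Hypothetical identity management data captured at a "first time"
tx = [("card_123", "device_9", 1000),
      ("card_123", "email_a", 1005),
      ("card_456", "device_9", 1010)]
nodes, edges = build_identity_graph(tx)
```

Because each edge carries its transaction timestamp, the same structure can be rebuilt later from updated data to obtain the second and subsequent identity graphs discussed above.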
Further, SHEKHAR teaches “training a first graph neural network for a first identity management component,”
See SHEKHAR in [0110] describe “the time-dependent graph convolutional neural network training engine 804 can train a time-dependent graph convolutional neural network to generate similarity probabilities for pairs of digital identities. In one or more embodiments, the time-dependent graph convolutional neural network training engine 804 trains the time-dependent graph convolutional neural network in an unsupervised setting by applying a graph-based loss function, such as a hinge loss-based loss function.” Here, SHEKHAR describes training an initial or first time-dependent graph convolutional neural network (i.e., first graph neural network) based on digital identity data (which is part of a first identity management component). Note the examiner construes the identity management component, consistent with specification paragraph [0015], to include any component that relates to creating identity graphs or graph neural networks from data.
Further, SHEKHAR teaches “and adapting the first identity management component to use the first graph neural network such that the first identity management component is adapted to generate a first identity management signal using the first graph neural network.”
See SHEKHAR in paragraph [0113] describe “the time-dependent graph convolutional neural network 812 can store the time-dependent graph convolutional neural network trained by the time-dependent graph convolutional neural network training engine 804 and used by the time-dependent graph convolutional neural network application manager 806. Digital transactions 814 and digital identities 816 can store identified digital transactions and the associated digital identities, respectively. The transaction graph generator 802 can generate a transaction graph based on the data stored in digital transactions 814 and digital identities 816.” Note the examiner construes a signal to be a notification or action, which can be based on stored data. Here, SHEKHAR shows that the first identity management signal can be based on the data stored in digital transactions and digital identities. SHEKHAR in [0113] thus describes adapting the initial digital identity information (i.e., first identity management component) to use the time-dependent graph convolutional neural network (i.e., first graph neural network) such that the initial digital identity information is adapted to generate a transaction graph (i.e., first identity management signal) using the first graph neural network.
However, SHEKHAR did not teach “obtaining identity management data from one or more source systems in a distributed enterprise computing environment of an enterprise, the identity management data comprising data on a set of identities, a set of entitlements, or a set of roles, wherein the set of identities, set of entitlements or set of roles are utilized in identity management in the distributed enterprise computing environment;”
In an analogous art, BADAWY21 teaches “obtaining identity management data from one or more source systems in a distributed enterprise computing environment of an enterprise, the identity management data comprising data on a set of identities, a set of entitlements, or a set of roles, wherein the set of identities, set of entitlements or set of roles are utilized in identity management in the distributed enterprise computing environment;”
See BADAWY21 in column 1, lines 16-20 mention “this disclosure relates generally to computer security. In particular, this disclosure relates to the application of artificial intelligence to recommend the addition or removal of access items in a distributed and networked computing environment.” Here, BADAWY21 shows that this process applies to a distributed computing environment. See BADAWY21 in col. 15, lines 63-67 discuss “to improve the performance of the ML recommender models, the system can be based on more sophisticated models such as graph embedding and graph neural network technologies, as one skilled in the art would understand. Also, as more data is available, more sophisticated models may be desired.” This shows BADAWY21 talks about applying the methods for graph neural network models.
Further, see BADAWY21 in column 5, lines 24-34 mention “to facilitate the assignment of these entitlements, enterprises may also be provided with the ability to define roles within the context of their Identity Management solution. A role within the context of Identity Management may be a collection of entitlements. These roles may be assigned a name or identifiers (e.g., manager, engineer, team leader) by an enterprise that designate the type of user or identity that should be assigned such a role. By assigning a role to an identity in the Identity Management context, the identity may be assigned the corresponding collection of entitlements associated with the assigned role,” and later in col. 5, lines 37-41, describe “thus, by managing the identity or identities to which users within the enterprise computing environment are assigned, the entitlements which a user may assigned (e.g., the functions or access which a user may be allowed) may be controlled.” This shows that BADAWY21 teaches roles being assigned names or identifiers (e.g., manager, engineer) by an enterprise to designate the type of user or identity within the organization. For more information, see BADAWY21 in column 2, lines 5-16.
Overall, BADAWY21 teaches obtaining identity management data from one or more source systems in a distributed enterprise computing environment of an enterprise, the identity management data comprising data on a set of identities, a set of entitlements, or a set of roles, wherein the set of identities, set of entitlements or set of roles are utilized in identity management in the distributed enterprise computing environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the base reference of SHEKHAR to incorporate the teachings of BADAWY21, because both references teach methods of using graph neural networks to organize and process data with user identity profiles.
One of ordinary skill in the art would be motivated to do so because this will "provide several advantages. Recommendations generated by ML recommender models will also not include any personal bias. A system using ML recommender models can be scaled up, as needed, and can be customized for enterprise users. A system using ML recommender models will also reduce the workload of managers and users, which traditionally, have to manually request access to entitlements. In addition, when a user is a new joiner, or a mover (described above) a system using ML recommender models will increase productivity of users and managers. For example, the recommender can teach or inform users what applications and tools are available to the users," (BADAWY21, column 14, lines 8-20).
Claim 2:
Regarding claim 2, SHEKHAR in view of BADAWY21 teaches the limitations in claim 1.
Further, SHEKHAR in view of BADAWY21 teaches “the system of claim 1, wherein training the first graph neural network comprises: generating a first embedding from the first graph neural network;”
See SHEKHAR in paragraph [0027] describe that "the time-dependent graph convolutional neural network shares learned neural network parameters when generating node embeddings, allowing the use of a previously-learned neural network parameter when generating a node embedding for a node that was added to the transaction graph after the training phase." SHEKHAR thus teaches in [0027] that the training process of a graph neural network involves generating a node embedding from that graph neural network.
Further, see SHEKHAR in paragraph [0059] mention "the fraudulent transaction detection system 106 can generate an edge connection between a pair of nodes that correspond to a pair of digital identities associated with the digital transaction represented by that edge connection... in one or more embodiments, more than one digital identity corresponds to the same digital transaction. For example, a user may initiate a digital transaction by entering credit card information via a client device. The fraudulent transaction detection system 106 may then generate an edge connection corresponding to that digital transaction between a first node corresponding to a credit card identifier associated with the credit card information and a second node corresponding to a device identifier associated with the client device." Here, SHEKHAR associates each node with a node embedding and identifies a first node corresponding to a transaction. From the application specification, paragraphs [0015] and [0019], an embedding is defined to be the "basis for training a graph neural network for one or more associated identity management components," and includes nodes or edges of the graph neural network. Thus, SHEKHAR teaches that the training comprises generating a first embedding from a first graph neural network. See SHEKHAR in paragraph [0065] for more information.
Further, SHEKHAR in view of BADAWY21 teaches “and training the first graph neural network based on the first embedding and a first loss function associated with the first identity management component.”
See SHEKHAR in paragraph [0027] describe that "the time-dependent graph convolutional neural network shares learned neural network parameters when generating node embeddings, allowing the use of a previously-learned neural network parameter when generating a node embedding for a node that was added to the transaction graph after the training phase." Here, SHEKHAR describes that the training process of an initial or first graph neural network involves generating a first embedding from that initial graph neural network.
See SHEKHAR in paragraph [0092] describe "the fraudulent transaction detection system 106 trains a time-dependent graph convolutional neural network to generate similarity probabilities for pairs of digital identities. In some embodiments, the fraudulent transaction detection system 106 trains the time-dependent graph convolutional neural network in an unsupervised setting by applying a graph-based loss function to learn useful and temporally predictive representation. In particular, the fraudulent transaction detection system 106 can apply the loss function to the output node representation z_u, ∀u∈V". Here, SHEKHAR mentions that training the initial or first graph neural network is based on an embedding, represented in this section by the node representation z_u, together with a loss function.
Further, SHEKHAR in paragraph [0110] mentions "the time-dependent graph convolutional neural network training engine 804 can train a time-dependent graph convolutional neural network to generate similarity probabilities for pairs of digital identities. In one or more embodiments, the time-dependent graph convolutional neural network training engine 804 trains the time-dependent graph convolutional neural network in an unsupervised setting by applying a graph-based loss function, such as a hinge loss-based loss function." Here, SHEKHAR mentions that the loss function is trained on data that represents digital identities, which is part of an initial identity management component. SHEKHAR teaches training the first graph neural network based on the first embedding and a first loss function associated with the first identity management component. See SHEKHAR in paragraph [0064] for more information.
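The training mechanism described in the passages cited above (node embeddings scored under a graph-based hinge loss) can be sketched minimally as follows. The one-step neighbor-averaging aggregation, the toy features, and the single sampled negative pair are simplifying assumptions for illustration only, not SHEKHAR's actual time-dependent architecture:

```python
def propagate(features, neighbors):
    """One graph-convolution-like step: each node's embedding is the
    average of its own features and its neighbors' features."""
    out = {}
    for node, feat in features.items():
        acc = list(feat)
        for nb in neighbors.get(node, []):
            for i, x in enumerate(features[nb]):
                acc[i] += x
        k = 1 + len(neighbors.get(node, []))
        out[node] = [x / k for x in acc]
    return out

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def hinge_loss(z, pos_pairs, neg_pairs, margin=1.0):
    """Graph-based hinge loss: a connected (positive) pair should score
    higher than a sampled unconnected (negative) pair by `margin`."""
    total = 0.0
    for (u, v), (u2, n) in zip(pos_pairs, neg_pairs):
        total += max(0.0, margin + dot(z[u2], z[n]) - dot(z[u], z[v]))
    return total / len(pos_pairs)

# Tiny hypothetical identity graph: A and B transact; C is unrelated
feats = {"A": [1.0, 0.0], "B": [0.9, 0.1], "C": [0.0, 1.0]}
nbrs = {"A": ["B"], "B": ["A"], "C": []}
z = propagate(feats, nbrs)
loss = hinge_loss(z, pos_pairs=[("A", "B")], neg_pairs=[("A", "C")])
```

Minimizing such a loss over the network parameters pushes connected identities toward similar embeddings, which is the behavior the cited training engine is described as learning.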
Claim 8:
Regarding claim 8, SHEKHAR teaches “generating a first identity graph from the identity management data at a first time;”
See SHEKHAR describe in the abstract that “the disclosed systems can utilize a time-dependent graph convolutional neural network to generate node embeddings for the nodes based on the edge connections of the transaction graph” and later in paragraph [0119] describe “the series of acts 900 also includes an act 904 of generating a transaction graph. For example, the act 904 involves generating a transaction graph comprising edge connections between a plurality of nodes corresponding to the plurality of digital identities. More particularly, act 904 can involve generating a node for each of the plurality of digital identities and generating the edge connections between the plurality of nodes based on digital transactions linking the digital identities. In one or more embodiments, the transaction graph comprises a first node corresponding to a first digital identity associated with a set of digital transactions from the plurality of digital transactions; a second node corresponding to a second digital identity associated with the set of digital transactions from the plurality of digital transactions; and a set of edge connections between the first node and the second node, the set of edge connections corresponding to the set of digital transactions.” Further, regarding the transaction timestamps associated with the edge connections 304a-304h in FIG. 3, see SHEKHAR in paragraph [0062]: "as further shown in FIG. 3, the fraudulent transaction detection system 106 can associate transaction timestamps with the edge connections 304a-304h of the transaction graph 300. In particular, the transaction timestamp associated with a particular edge connection can indicate a time at which the corresponding digital transaction took place." Here, SHEKHAR teaches generating a transaction graph of digital identities (i.e., a first identity graph) from the plurality of digital transactions (i.e., identity management data) at a respective transaction timestamp (i.e., a first time).
Further, see SHEKHAR in paragraph [0064] describe “the fraudulent transaction detection system 106 can utilize a time-dependent graph convolutional graph neural network to analyze digital identities that were not represented within the transaction graph 300 at the time of training. Thus, the fraudulent transaction detection system 106 can continuously and flexibly update the transaction graph 300 to represent new digital identities and new transactions without suffering the inefficiencies caused by retraining the model used to analyze the transaction graph 300, which would be required under many conventional systems.” Note the examiner construes "first time" to mean the time at which an initial model or item is created. In paragraph [0064], SHEKHAR shows generating an initial transaction graph (i.e., identity graph) from the fraudulent transaction detection system 106, and [0062] shows that the fraudulent transaction data is part of the identity management data. SHEKHAR further mentions in [0064] that the system flexibly updates the transaction graph, implicitly disclosing the creation of second, third, and subsequent identity graphs. Overall, SHEKHAR teaches creating a first identity graph from the identity management data at a first time. See the abstract and paragraphs [0018], [0029], [0031], and [0033]-[0034] in SHEKHAR for more information.
Further, SHEKHAR teaches “training a first graph neural network for a first identity management component,”
See SHEKHAR in [0110] describe “the time-dependent graph convolutional neural network training engine 804 can train a time-dependent graph convolutional neural network to generate similarity probabilities for pairs of digital identities. In one or more embodiments, the time-dependent graph convolutional neural network training engine 804 trains the time-dependent graph convolutional neural network in an unsupervised setting by applying a graph-based loss function, such as a hinge loss-based loss function.” Here, SHEKHAR describes training an initial or first time-dependent graph convolutional neural network (i.e., first graph neural network) based on digital identity data (which is part of a first identity management component). Note the examiner construes the identity management component, consistent with specification paragraph [0015], to include any component that relates to creating identity graphs or graph neural networks from data. The specification notes in paragraph [0015] that “embodiments of identity management systems and methods for their operation may utilize graph neural networks for the implementation of identity management components,” thereby defining the identity management component to encompass a portion of any graph neural network model.
Further, SHEKHAR teaches “and adapting the first identity management component to use the first graph neural network such that the first identity management component is adapted to generate a first identity management signal using the first graph neural network.”
See SHEKHAR in paragraph [0113] describe “the time-dependent graph convolutional neural network 812 can store the time-dependent graph convolutional neural network trained by the time-dependent graph convolutional neural network training engine 804 and used by the time-dependent graph convolutional neural network application manager 806. Digital transactions 814 and digital identities 816 can store identified digital transactions and the associated digital identities, respectively. The transaction graph generator 802 can generate a transaction graph based on the data stored in digital transactions 814 and digital identities 816.” Note the examiner construes a signal to be a notification or action, which can be based on stored data. Here, SHEKHAR shows that the first identity management signal can be based on the data stored in digital transactions and digital identities. SHEKHAR in [0113] thus describes adapting the initial digital identity information (i.e., first identity management component) to use the time-dependent graph convolutional neural network (i.e., first graph neural network) such that the initial digital identity information is adapted to perform the action of generating a transaction graph (i.e., first identity management signal) using the first graph neural network.
However, SHEKHAR did not teach “a method for identity management, comprising: obtaining identity management data from one or more source systems in a distributed enterprise computing environment of an enterprise, the identity management data comprising data on a set of identities, a set of entitlements, or a set of roles, wherein the set of identities, set of entitlements or set of roles are utilized in identity management in the distributed enterprise computing environment;”
In an analogous art, BADAWY21 teaches “obtaining identity management data from one or more source systems in a distributed enterprise computing environment of an enterprise, the identity management data comprising data on a set of identities, a set of entitlements, or a set of roles, wherein the set of identities, set of entitlements or set of roles are utilized in identity management in the distributed enterprise computing environment;”
See BADAWY21 in column 1, lines 16-20 mention “this disclosure relates generally to computer security. In particular, this disclosure relates to the application of artificial intelligence to recommend the addition or removal of access items in a distributed and networked computing environment.” Here, BADAWY21 shows that this process applies to a distributed computing environment. See BADAWY21 in col. 15, lines 63-67 discuss “to improve the performance of the ML recommender models, the system can be based on more sophisticated models such as graph embedding and graph neural network technologies, as one skilled in the art would understand. Also, as more data is available, more sophisticated models may be desired.” This shows BADAWY21 talks about applying the methods for graph neural network models.
Further, see BADAWY21 in column 5, lines 24-34 mention “to facilitate the assignment of these entitlements, enterprises may also be provided with the ability to define roles within the context of their Identity Management solution. A role within the context of Identity Management may be a collection of entitlements. These roles may be assigned a name or identifiers (e.g., manager, engineer, team leader) by an enterprise that designate the type of user or identity that should be assigned such a role. By assigning a role to an identity in the Identity Management context, the identity may be assigned the corresponding collection of entitlements associated with the assigned role,” and later in col. 5, lines 37-41, describe “thus, by managing the identity or identities to which users within the enterprise computing environment are assigned, the entitlements which a user may assigned (e.g., the functions or access which a user may be allowed) may be controlled.” This shows that BADAWY21 teaches roles being assigned names or identifiers (e.g., manager, engineer) by an enterprise to designate the type of user or identity within the organization. For more information, see BADAWY21 in column 2, lines 5-16. Overall, BADAWY21 teaches obtaining identity management data from one or more source systems in a distributed enterprise computing environment of an enterprise, the identity management data comprising data on a set of identities, a set of entitlements, or a set of roles, wherein the set of identities, set of entitlements or set of roles are utilized in identity management in the distributed enterprise computing environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the base reference of SHEKHAR to incorporate the teachings of BADAWY21, because both references teach methods of using graph neural networks to organize and process data with user identity profiles.
One of ordinary skill in the art would be motivated to do so because this will "provide several advantages. Recommendations generated by ML recommender models will also not include any personal bias. A system using ML recommender models can be scaled up, as needed, and can be customized for enterprise users. A system using ML recommender models will also reduce the workload of managers and users, which traditionally, have to manually request access to entitlements. In addition, when a user is a new joiner, or a mover (described above) a system using ML recommender models will increase productivity of users and managers. For example, the recommender can teach or inform users what applications and tools are available to the users," (BADAWY21, column 14, lines 8-20).
Claim 9:
Regarding claim 9, SHEKHAR in view of BADAWY21 teaches the limitations in claim 8. Referring to claim 9, the claim recites similar limitations as corresponding claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale.
Claim 15:
Regarding claim 15, the claim recites similar limitations as claim 1, and claim 15 recites “a non-transitory computer readable medium” and is rejected for similar reasons as claim 1 using similar teachings and rationale as herein above.
Claim 16:
Regarding claim 16, SHEKHAR in view of BADAWY21 teaches the limitations in claim 15. Referring to claim 16, the claim recites similar limitations as corresponding claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale.
Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over SHEKHAR in view of BADAWY21, and further in view of Kundu A. et al. (US PG Pub. No. US20200358796A1), published on November 12, 2020, (hereinafter, KUNDU).
Claim 3:
Regarding claim 3, SHEKHAR in view of BADAWY21 teaches the limitations in claim 2.
However, SHEKHAR in view of BADAWY21 did not teach “the system of claim 2, wherein the first identity management signal is associated with clustering of the identity graph,”
In an analogous art, KUNDU teaches “the system of claim 2, wherein the first identity management signal is associated with clustering of the identity graph.”
See KUNDU describe in paragraph [0002] "a deep-learning based method evaluates similarities of entities in decentralized identity graphs. One or more processors represent a first identity profile as a first identity graph and a second identity profile as a second identity graph. The processor(s) compare the first identity graph to the second identity graph, which are decentralized identity graphs from one or more identity networks, in order to determine a similarity score between the first identity profile and the second identity profile. Thus, the similarity score is across multiple identity profiles. The processor(s) then implement a security action based on the similarity score". Note the examiner construes "first" to mean the initial item that is generated. Here, KUNDU teaches a security action, which the examiner construes as an identity management signal associated with a first identity graph.
Further, KUNDU also describes in paragraph [0051] "use of graph neural networks and supervised temporal learning for dynamic similarity evaluation affords the ability to analyze identity profiles across multiple networks, which is made more effective with temporal attributes, using deep learning. This way, deep neural networks can identify behaviors and traits in graph topology over time that are similar. Based on training data, the present invention can more effectively: identify pairs of similar members; identify pairs of similar attributes; and/or identify clusters of similar members and/or attributes." KUNDU notes that similar members, including clusters of members with similar attributes, are part of the graph topology. KUNDU also notes in paragraph [0099] “as shown in step 6, the frame representation 906 sends member or attribute node “frames” (latent representations of local topology and properties) to the behavior identification training 908, which outputs (step 7) sequence model weights used to predict similarity scores (also using the mapping of highly similar profiles for correlation shown in step 8), as well as a sequence of member or attribute frames (step 9). As shown in step 10, a final similarity score with linked identity profiles is then output to a requester of an identity for a particular person.” Here, KUNDU connects the similarity score with the clustering of highly similar profiles in identity graphs, where the score relates to an identity management signal. Since "first" is construed by the examiner as the initial creation of any model or graph, KUNDU teaches that the first identity management signal is associated with clustering of the identity graph. See KUNDU paragraphs [0094] and [0103] for more information.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine SHEKHAR and BADAWY21, which teach using graph neural networks to organize and process data with network graphs, with KUNDU's teaching of associating clustering methods with graph neural networks to organize identity data for users.
One of ordinary skill in the art would be motivated to do so because by integrating KUNDU’s framework into the methods of SHEKHAR and BADAWY21, one with ordinary skill in the art would achieve the goal of providing “graph neural networks and supervised temporal learning for dynamic similarity evaluation affords the ability to analyze identity profiles across multiple networks, which is made more effective with temporal attributes, using deep learning. This way, deep neural networks can identify behaviors and traits in graph topology over time that are similar. Based on training data, the present invention can more effectively: identify pairs of similar members; identify pairs of similar attributes; and/or identify clusters of similar members and/or attributes.” (KUNDU, paragraph [0051]).
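As a hypothetical sketch of the similarity-score clustering that KUNDU describes, the following computes cosine similarity between identity-profile embeddings and groups profiles whose score exceeds a fixed threshold; the threshold stands in for KUNDU's trained sequence model, and all names and values are illustrative assumptions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def cluster_profiles(embeddings, threshold=0.9):
    """Greedy single-link grouping: profiles whose pairwise similarity
    score exceeds `threshold` land in the same cluster; each resulting
    cluster corresponds to the kind of signal construed above."""
    clusters = []
    for name, emb in embeddings.items():
        for c in clusters:
            if any(cosine(emb, embeddings[m]) > threshold for m in c):
                c.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Hypothetical identity-profile embeddings: p1 and p2 are near-duplicates
profiles = {"p1": [1.0, 0.0], "p2": [0.98, 0.05], "p3": [0.0, 1.0]}
clusters = cluster_profiles(profiles)
```

Here the two highly similar profiles group together while the dissimilar profile stays apart, mirroring KUNDU's mapping of highly similar profiles into clusters.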
Claim 10:
Regarding claim 10, SHEKHAR in view of BADAWY21 teaches the limitations in claim 9. Referring to claim 10, the claim recites limitations similar to those of corresponding claim 3 and is rejected for similar reasons as claim 3, using similar teachings and rationale.
Claim 17:
Regarding claim 17, SHEKHAR in view of BADAWY21 teaches the limitations in claim 16. Referring to claim 17, the claim recites limitations similar to those of corresponding claim 3 and is rejected for similar reasons as claim 3, using similar teachings and rationale.
Claims 4, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over SHEKHAR in view of BADAWY21, further in view of KUNDU, and further in view of Bianchi, F. et al., “Spectral clustering with graph neural networks for graph pooling,” International Conference on Machine Learning, published on December 29, 2020, and available at https://arxiv.org/pdf/1907.00481 (hereinafter, BIANCHI).
Claim 4:
Regarding claim 4, SHEKHAR in view of BADAWY21, and further in view of KUNDU teaches the limitations in claim 3.
However, SHEKHAR in view of BADAWY21, and further in view of KUNDU, does not teach “the system of claim 3, wherein the first loss function is a spectral loss version of modularity.”
In an analogous art, BIANCHI teaches “the system of claim 3, wherein the first loss function is a spectral loss version of modularity,”
See BIANCHI, page 3, section 3 (Spectral Clustering with GNNs), describing "propos[ing] a GNN-based approach that addresses the aforementioned limitations of SC algorithms and that clusters the nodes according to the graph topology (nodes in the same cluster should be strongly connected) and to the node features (nodes in the same cluster should have similar features). Our method assumes that node features represent a good initialization for computing the cluster assignments...The parameters ΘGNN and ΘMLP are jointly optimized by minimizing an unsupervised loss function Lu composed of two terms, which approximates the relaxed formulation of the minCUT problem". Further, BIANCHI in the abstract mentions "Spectral clustering (SC) is a popular clustering technique to find strongly connected communities on a graph. SC can be used in Graph Neural Networks (GNNs) to implement pooling operations that aggregate nodes belonging to the same cluster. However, the eigendecomposition of the Laplacian is expensive and, since clustering results are graph-specific, pooling methods based on SC must perform a new optimization for each new sample. In this paper, we propose a graph clustering approach that addresses these limitations of SC. We formulate a continuous relaxation of the normalized minCUT problem and train a GNN to compute cluster assignments that minimize this objective." Here, BIANCHI describes an unsupervised loss function associated with a graph neural network that constitutes a spectral loss version of modularity, where "spectral" refers to clustering using a spectral method within graph neural network models.
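For illustration only, the relaxed minCUT objective that BIANCHI describes, composed of a cut term and an orthogonality term, can be sketched as follows. This is the examiner's sketch of the quoted formulation, not BIANCHI's released code; the function and variable names are hypothetical.

```python
import numpy as np

def mincut_losses(A, S):
    """Two-term relaxed minCUT objective of the kind BIANCHI quotes.
    A is a symmetric adjacency matrix (n x n); S is a soft cluster
    assignment matrix (n x K). Training minimizes cut_loss + ortho_loss."""
    A = np.asarray(A, float)
    S = np.asarray(S, float)
    D = np.diag(A.sum(axis=1))  # degree matrix
    # Cut term: maximize within-cluster connectivity relative to degree.
    cut_loss = -np.trace(S.T @ A @ S) / np.trace(S.T @ D @ S)
    # Orthogonality term: push cluster assignments toward balanced,
    # non-degenerate solutions.
    K = S.shape[1]
    SS = S.T @ S
    ortho_loss = np.linalg.norm(SS / np.linalg.norm(SS) - np.eye(K) / np.sqrt(K))
    return cut_loss, ortho_loss
```

On a graph of two disconnected pairs with a perfect two-cluster assignment, the cut term attains its minimum of -1 and the orthogonality term vanishes, consistent with the strongly-connected-communities behavior BIANCHI describes.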
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references of SHEKHAR, BADAWY21, and KUNDU with the teachings of BIANCHI because all references teach methods of using graph neural network models to organize and process data with network graphs and user profile information.
One of ordinary skill in the art would be motivated to do so because the model “learns the optimal trade-off between Lc and Lo, and achieves a better and more stable clustering performance,” (BIANCHI, page 7, section 5.1. Clustering the Graph Nodes).
Claim 11:
Regarding claim 11, SHEKHAR in view of BADAWY21, and further in view of KUNDU teaches the limitations in claim 10. Referring to claim 11, the claim recites similar limitations as corresponding claim 4 and is rejected for similar reasons as claim 4 using similar teachings and rationale.
Claim 18:
Regarding claim 18, SHEKHAR in view of BADAWY21, and further in view of KUNDU teaches the limitations in claim 17. Referring to claim 18, the claim recites similar limitations as corresponding claim 4 and is rejected for similar reasons as claim 4 using similar teachings and rationale.
Claims 5, 6, 7, 12, 13, 14, 19, 20 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over SHEKHAR in view of BADAWY21, further in view of Kou, X. et al. (Pub. No. CN112257841A), published on January 22, 2021 (hereinafter, KOU).
Claim 5:
Regarding claim 5, SHEKHAR in view of BADAWY21 teaches the limitations in claim 2.
Further, SHEKHAR teaches “the system of claim 2, comprising: training a second graph neural network based on the embedding and a second loss function associated with a second identity management component;”
See SHEKHAR, paragraphs [0117-0119], describing “in one or more embodiments, a system includes at least one memory device comprising a plurality of digital identities corresponding to a plurality of digital transactions; and a time-dependent graph convolutional neural network trained to generate similarity probabilities for nodes in transaction graphs. The system can further include at least one server device configured to cause the system to perform the acts of FIG. 9. The series of acts 900 includes an act 902 of identifying digital identities. For example, the act 902 involves identifying a plurality of digital identities corresponding to a plurality of digital transactions. In one or more embodiments, the plurality of digital identities comprises at least one of an email address, an IP address, a credit card identifier, or a device identifier. … In one or more embodiments, the transaction graph comprises a first node corresponding to a first digital identity associated with a set of digital transactions from the plurality of digital transactions; a second node corresponding to a second digital identity associated with the set of digital transactions from the plurality of digital transactions; and a set of edge connections between the first node and the second node, the set of edge connections corresponding to the set of digital transactions.” SHEKHAR describes various subsequent transactions, including first and second digital transactions that relate to digital identities (such as a credit card identifier or email), respectively, and these relate to first and second transaction graphs (i.e., first and second graph neural networks).
Further, see paragraph [0027], where SHEKHAR describes "the time-dependent graph convolutional neural network shares learned neural network parameters when generating node embeddings, allowing the use of a previously-learned neural network parameter when generating a node embedding for a node that was added to the transaction graph after the training phase. Accordingly, the fraudulent transaction detection system can more flexibly accommodate changes to the set of digital identities being analyzed."
In paragraph [0064], SHEKHAR describes “the fraudulent transaction detection system 106 can utilize a time-dependent graph convolutional graph neural network to analyze digital identities that were not represented within the transaction graph 300 at the time of training. Thus, the fraudulent transaction detection system 106 can continuously and flexibly update the transaction graph 300 to represent new digital identities and new transactions without suffering the inefficiencies caused by retraining the model used to analyze the transaction graph 300, which would be required under many conventional systems.” Here, SHEKHAR mentions in [0027] that the graph can be updated and changed as new information enters, and in [0064] that the system can continuously and flexibly update the transaction graph to represent new digital identities; that new information is then used in subsequent models, such as second or third graph neural networks.
Further, SHEKHAR describes in paragraphs [0092-0093], "in one or more embodiments, the fraudulent transaction detection system 106 trains a time-dependent graph convolutional neural network to generate similarity probabilities for pairs of digital identities. In some embodiments, the fraudulent transaction detection system 106 trains the time-dependent graph convolutional neural network in an unsupervised setting by applying a graph-based loss function to learn useful and temporally predictive representation. In particular, the fraudulent transaction detection system 106 can apply the loss function to the output node representation zu, ∀u∈V and optimize the weight matrices Wk, ∀k∈{1, . . . , K}. In one or more embodiments, the fraudulent transaction detection system 106 uses a hinge loss-based loss function to maximize the similarity of representations within a context for a node while maximizing the discrimination for nodes that are beyond the node context." Here, SHEKHAR shows training a graph neural network with an embedding and a loss function (i.e., training a second graph neural network based on the embedding and a second loss function associated with a second identity management component).
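For illustration only, the hinge loss-based objective quoted from SHEKHAR's paragraphs [0092-0093] can be sketched as follows. This is the examiner's simplified sketch, not SHEKHAR's implementation: the dot-product score, margin value, and function names are all hypothetical.

```python
import numpy as np

def hinge_embedding_loss(z_u, z_pos, z_negs, margin=1.0):
    """Hinge-style unsupervised loss of the kind SHEKHAR quotes: pull
    the embedding z_u toward a co-occurring ('context') node z_pos and
    push its score at least `margin` above each negative sample z_n."""
    z_u, z_pos = np.asarray(z_u, float), np.asarray(z_pos, float)
    pos_score = z_u @ z_pos
    loss = 0.0
    for z_n in z_negs:
        # Zero loss once the positive score beats the negative by margin.
        loss += max(0.0, margin - pos_score + z_u @ np.asarray(z_n, float))
    return loss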
SHEKHAR also describes in paragraph [0101] that "the fraudulent transaction detection system 106 determines whether the similarity probability satisfies a probability threshold. Indeed, in some instances, the fraudulent transaction detection system 106 determines that two digital identities originate from the same entity if the similarity probability satisfies a probability threshold. Based on the similarity probability satisfying the probability threshold and that the one digital identity has a previously-determined fraudulent status, the fraudulent transaction detection system 106 can determine that the second digital identity is associated with a fraudulent entity. Accordingly, the fraudulent transaction detection system 106 can determine that the digital identity is associated with a fraudulent transaction." This shows SHEKHAR's process of training on a second digital identity for a second graph neural network, where each transaction with a digital identity is connected to a graph neural network, and the second digital identity determined by the system 106 is associated with a second graph neural network.
Note that the examiner construes "second identity management component" to mean any part that is involved in plotting models for additional or subsequent graph neural networks. The specification notes in paragraph [0015], “embodiments of identity management systems and methods for their operation may utilize graph neural networks for the implementation of identity management components.” The specification thus defines an identity management component as involving, in part, a graph neural network model, so a graph neural network can be associated with an identity management component. From [0101], SHEKHAR mentions that the loss function applies to any graph neural network and therefore can apply to a second identity management component. Overall, SHEKHAR teaches training a second graph neural network based on the embedding and a second loss function associated with a second identity management component. For more information, see SHEKHAR, paragraphs [0094-0097] and [0108].
However, SHEKHAR in view of BADAWY21 does not explicitly teach “and adapting the second identity management component to generate a second identity management signal using the second graph neural network.”
In an analogous art, KOU teaches “and adapting the second identity management component to generate a second identity management signal using the second graph neural network.”
See KOU describe in paragraph [0065], "based on the corresponding graph embedding results, an update process is triggered to update the relation triples in the initial structure of the first graph neural network, forming a second graph neural network. Thus, it realizes the processing of dynamic data using graph neural networks, enabling continuous learning of graph neural networks to form new graph neural networks in different data, while retaining the knowledge already acquired by the initial graph neural network."
Further, see KOU's mention in paragraph [0138], "therefore, it is possible to utilize different data in the data stream through the second graph neural network to execute different business processes in the corresponding usage environment. For example, a second-graph neural network can be used to perform financial information recommendation, financial product recommendation or prediction in financial information processing scenarios. Of course, it can also be used to perform corresponding question-and-answer processes based on different data in news data streams". Note, an identity management component relates to models, including graph neural networks, that can perform tasks such as clustering, modeling, classification, or other related tasks, as defined in the specification at paragraph [0017]; further, the specification notes in paragraph [0015], “embodiments of identity management systems and methods for their operation may utilize graph neural networks for the implementation of identity management components.” The specification thus defines an identity management component as involving, in part, any graph neural network model. Note, the examiner construes an identity management signal as any information or process (such as data processing, modeling, or generating notifications or recommendations) that relates to a model.
Here, KOU teaches adapting the processing of dynamic data with continuous learning (i.e., the second identity management component) in paragraph [0065] so as to execute, per paragraph [0138], different business processes in the corresponding usage environment, such as financial information recommendation (i.e., a second identity management signal), from a second graph neural network.
KOU further discusses in paragraph [0165], "referring to Figure 11, in the process of realizing intelligent question answering through the data processing method in the graph neural network provided in this application, it is known that some relevant triples of user nodes Barack Obama and Michelle Obama are mainly related to three concepts: "family", "occupation" and "location". If a new relationship triplet appears in the conversation (Michelle Obama, Daughter, Malia Ann Obama), we only need to update the information related to "family" in Barack Obama's profile. We do not need to learn and update information such as his "occupation" or "location"". KOU here applies graph neural networks to identity management by illustrating the update of relationships among user nodes within a graph, which shows that KOU provides analogous art.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references of SHEKHAR and BADAWY21 with the teachings of KOU because all references teach methods of using graph neural networks to organize and process data with user identity profiles.
One of ordinary skill in the art would be motivated to do so because "the data processing method in the graph neural network provided in this application can not only continuously learn from new data, but also reduce the forgetting of old knowledge, which is beneficial to the user experience. Specifically, referring to Figure 8, which is a schematic diagram of the test effect of the data processing method in the graph neural network in the embodiment of the present invention, as the number of new relation triples increases, the data processing method in the graph neural network provided in this application achieves continuous learning, although the performance of the related technical models has decreased to a certain extent. However, compared with related technologies, the data processing method in the graph neural network provided by this invention has achieved significantly better results. Therefore, the data processing method in the graph neural network provided by this application is more effective in decoupling and dynamically updating relation triples in processing continuous multi-relation graph data learning, which is beneficial to improving the user experience" (KOU, paragraph [0157]).
Claim 6:
Regarding claim 6, SHEKHAR in view of BADAWY21, further in view of KOU, teaches the limitations in claim 5.
SHEKHAR in view of BADAWY21, further in view of KOU, teaches “the system of claim 5, comprising: updating the identity management data;”
See SHEKHAR, paragraphs [0022-0023], mentioning "accordingly, the fraudulent transaction detection system can determine that a digital transaction associated with that digital identity is a fraudulent transaction... Conventional fraud detection systems can seek to flag possible fraudulent transactions. For instance, a conventional fraud detection system may track digital identities (e.g., email addresses, credit card identifiers, etc.) associated with digital transactions and determine whether certain digital identities originate from the same user." SHEKHAR here shows the system relates to identity management, where the data includes digital identities, such as email addresses or other identifiers, that are associated with a specific user.
Further, see SHEKHAR in paragraph [0053], "in some embodiments, the fraudulent transaction detection system 106 identifies an identity attribute associated with a digital identity by tracking the identity attribute. Indeed, the fraudulent transaction detection system 106 can store an identity attribute of a corresponding digital identity and update the identity attribute upon detecting a change. As an illustration, the fraudulent transaction detection system 106 can store the number of digital transactions 208 associated with the digital identity 204 and update the number of digital transactions 208 upon identifying a subsequent digital transaction corresponding to the digital identity 204". Here, SHEKHAR shows how the identity management data is updated when a change in an attribute of a corresponding digital identity is detected, which relates to a system comprising updating the identity management data. See SHEKHAR, paragraph [0064], for more information.
Further, SHEKHAR teaches “and adapting the first identity management component to use the second graph neural network such that the first identity management component is adapted to generate the first identity management signal using the second graph neural network.”
See SHEKHAR, paragraphs [0069-0070], describing "for example, as shown in FIG. 4B, the time-dependent graph convolutional neural network 404 utilizes, as the input to the second neural network layer 418, node embeddings (e.g., generated by the first neural network layer 416)...In one or more embodiments, the node embeddings that make up the input to the first neural network layer 416 include the identity attributes of the digital identities corresponding to the nodes 408 a-408 c, 408 e-408 f. Thus, the time-dependent graph convolutional neural network 404 can generate the node embedding 414 a for the node 408 a based on the identity attributes of the plurality of digital identities represented in the transaction graph 402."
Further, see paragraph [0056], where SHEKHAR describes “as shown in FIG. 2, based on the analysis of the plurality of digital identities 202, the fraudulent transaction detection system 106 determines whether a particular digital identity (e.g., the digital identity 204) corresponds to a fraudulent transaction … ,in one or more embodiments, upon determining that a digital identity corresponds to a fraudulent transaction, the fraudulent transaction detection system 106 associates a fraudulent status with the digital identity (e.g., the fraudulent status 212 associated with the digital identity 204). In some embodiments, the fraudulent transaction detection system 106 transmits a notification indicating that the digital identity corresponds to a fraudulent transaction. For example, the fraudulent transaction detection system 106 can transmit a notification to an administrator or manager of the online retail website from which the fraudulent transaction originated. The fraudulent transaction detection system 106 can also transmit a notification to a device associated with the digital identity (e.g., send an email or text message to the client device of the person named on a fraudulently used credit card indicating that their credit card was used in a fraudulent transaction).” The plurality of digital identities can include information from the second graph neural network, so that the first transaction information (i.e., the first identity management component) sends a notification or email to a client (i.e., a signal) using information from the updated second graph neural network. Note, the examiner construes an identity management signal as any information or process (such as data processing, modeling, or generating notifications) that relates to a model. Here, SHEKHAR shows an identity management component using a second graph neural network to generate information in terms of node embeddings (i.e., a first identity management signal) that becomes an input including identity attributes of digital identities.
Further, SHEKHAR in view of BADAWY21, further in view of KOU teaches “training a second graph neural network for the first identity management component;”
See KOU in paragraph [0146], describing “the network parameters of the second graph neural network are adjusted; until the loss function of the second graph neural network reaches the corresponding convergence condition, so as to realize the processing of data to be processed in different usage environments through the second graph neural network. For newly arrived multi-relation graph data Ti, the model can be trained iteratively on Ti and its activated adjacent relation triples. …Specifically, when the first graph neural network is a knowledge graph, based on the data to be processed in the data stream, corresponding invalid relation triples are identified as negative examples; based on the negative examples, a first loss function and a second loss function matching the second graph neural network are determined.” Here, KOU describes training a model iteratively (generating second, third, or subsequent models), which relates to training a second graph neural network for the processing of data in different usage environments, where a different usage environment can include information from the first graph neural network (i.e., a first identity management component).
Further, see KOU in paragraphs [0057-0059], describing "the information processing module is used to determine the first knowledge graph corresponding to the target object in the question and answer information based on the corresponding question and answer information when the first graph neural network is a knowledge graph and the environment in which the first graph neural network is used is question and answer information processing. The information processing module is used to determine, based on the target object, updated data in the network information that matches the target object; The information processing module is used to decouple the first knowledge graph of the target object and update the different relation triples in the first knowledge graph based on the updated data to form a second knowledge graph, so as to respond to the question information received by the terminal through the second knowledge graph." KOU here describes that the information from the first graph neural network can be incorporated into a second graph neural network for the model training step of updating relation triples.
KOU further elaborates in paragraph [0123], "referring to Figure 5, which is a schematic diagram of the decoupling process of the first graph neural network in an embodiment of the present invention, when the i-th training set Ti appears, the first graph neural network needs to update the graph embedding representation according to these new relation triples," showing that the decoupling process serves as a model training step to generate a second graph neural network.
Note that the specification states in paragraph [0015], “embodiments of identity management systems and methods for their operation may utilize graph neural networks for the implementation of identity management components.” The specification thus defines an identity management component as involving, in part, any graph neural network model. Here, part of the information from the first graph neural network was used to train a second graph neural network. KOU therefore teaches training a second graph neural network for the first identity management component. See paragraphs [0150] and [0165] for more information.
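For illustration only, KOU's decoupled-update idea, in which only the embedding components matched by newly arrived relation triples are updated while the rest retain their learned values, can be sketched as follows. This is the examiner's hypothetical sketch, not KOU's implementation; the placeholder gradient and all names are assumptions.

```python
import numpy as np

def update_decoupled_embeddings(components, new_triples, lr=0.1):
    """Sketch of KOU's decoupling: embeddings are split into per-relation
    components, and only components matched by new (head, relation, tail)
    triples are updated, mitigating catastrophic forgetting."""
    updated = {rel: np.asarray(vec, float).copy() for rel, vec in components.items()}
    for head, rel, tail in new_triples:
        if rel in updated:
            # Placeholder for a real gradient from the matched loss function.
            grad = np.ones_like(updated[rel])
            updated[rel] -= lr * grad  # touch only the matching component
    return updated
```

Consistent with KOU's Figure 11 example, a new "family" triple would update only the "family" component, leaving "occupation" and "location" untouched.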
Further, SHEKHAR in view of BADAWY21, further in view of KOU, teaches “generating a second identity graph from the updated identity management data at a second time;”
See KOU in paragraph [0168], describing "this invention, …, acquires the initial structure of a first graph neural network and the data to be processed in the data stream; in response to the acquired data to be processed, a decoupling process is triggered, and the relation triples in the first graph neural network are decoupled into multiple embedding components; based on the different embedding components that match the relation triples in the first graph neural network, graph embeddings corresponding to different embedding components are determined; based on the corresponding graph embedding results, an update process is triggered to update the relation triples in the initial structure of the first graph neural network, forming a second graph neural network. Thus, it realizes the processing of dynamic data using graph neural networks, enabling continuous learning of graph neural networks to form new graph neural networks in different data, while retaining the knowledge already acquired by the initial graph neural network. This improves the richness and foresight of data processing and enhances the user experience". Here, KOU's note in paragraph [0168] on the "processing of dynamic data using graph neural networks, enabling continuous learning of graph neural networks to form new graph neural networks in different data" shows that the second graph neural network is created by an update process on the initial structure of the original graph neural network (i.e., when the identity management data is updated at a second time). See paragraph [0120] in KOU for more details. Note, the examiner interprets identity management data, as defined in the specification at paragraph [0010], as any information that relates to a list of users with identities or profiles delineating relationships within a group.
The specification in [0010] further mentions “data generated from the modeling or analysis of identity management data of an enterprise using, for example identity graphs representing identity management data of that enterprise. … components … may include peer group analysis component, role, identity or entitlement mining or validation components, access modeling components, risk analysis components, access recommender components, visualization components, or outlier and anomaly detection components, among others.” Note, the examiner construes "second time" as a time at which the model has been updated.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references of SHEKHAR and BADAWY21 with the teachings of KOU because all references teach methods of using graph neural networks to organize and process data with user profiles.
One of ordinary skill in the art would be motivated to do so because "the data processing method in the graph neural network provided in this application can not only continuously learn from new data, but also reduce the forgetting of old knowledge, which is beneficial to the user experience. Specifically, referring to Figure 8, which is a schematic diagram of the test effect of the data processing method in the graph neural network in the embodiment of the present invention, as the number of new relation triples increases, the data processing method in the graph neural network provided in this application achieves continuous learning, although the performance of the related technical models has decreased to a certain extent. However, compared with related technologies, the data processing method in the graph neural network provided by this invention has achieved significantly better results. Therefore, the data processing method in the graph neural network provided by this application is more effective in decoupling and dynamically updating relation triples in processing continuous multi-relation graph data learning, which is beneficial to improving the user experience" (KOU, paragraph [0157]).
Claim 7:
Regarding claim 7, SHEKHAR in view of BADAWY21, further in view of KOU, teaches the limitations in claim 6.
Further, KOU teaches “the system of claim 6, wherein training the second graph neural network comprises: generating a second embedding from the second graph neural network;”
See KOU in paragraph [0065], mentioning that "based on the different embedding components that match the relation triples in the first graph neural network, graph embeddings corresponding to different embedding components are determined; based on the corresponding graph embedding results, an update process is triggered to update the relation triples in the initial structure of the first graph neural network, forming a second graph neural network." Here, KOU teaches that generating the second graph neural network involves updating the information from the initial, or first, graph neural network to generate updated graph embeddings that correspond to different embedding components (i.e., a second embedding) in the second graph neural network.
Further, KOU teaches “training the second graph neural network based on the second embedding and the first loss function associated with the first identity management component,”
See KOU in paragraph [0146], describing "a first loss function and a second loss function that match the second graph neural network are determined; based on the first loss function and the second loss function, a loss function that matches the second graph neural network is determined; based on the loss function that matches the second graph neural network, the network parameters of the second graph neural network are adjusted; until the loss function of the second graph neural network reaches the corresponding convergence condition, so as to realize the processing of data to be processed in different usage environments through the second graph neural network." Further, see KOU in paragraph [0085], where KOU shows "During training, the model approximates the correct trend using objective functions such as cross-entropy."
Further, see KOU in paragraph [0144], describing that "during training, the model iteratively trains the semantic components of new relation triples and activated neighbor relation triples. Through the training process shown in the preceding steps, the graph neural network model can not only learn the embedding representation of new data, but also effectively prevent the catastrophic forgetting problem, thus ensuring the continuous learning results of the graph neural network." Here, KOU describes that model training involves using objective functions, of which a loss function is one example. "Iteratively" here shows that the model trains several times to generate first, second, third, and subsequent graph neural network models. KOU teaches training, per [0146], based on the first loss function, as well as embeddings from new data per [0144] (interpreted as a second embedding from the iterative training the model undergoes), until the network parameters of the second graph neural network are adjusted. See paragraph [0130] in KOU for more information.
Further, see KOU, paragraph [0055], which describes: "the information processing module is used to determine the sum of the first loss function, the second loss function, and the constraint loss function based on the weight hyperparameter of the regularization term corresponding to the constraint loss function, and use it as the loss function that matches the second graph neural network." KOU further describes in [0057] that "the information processing module is used to determine the first knowledge graph corresponding to the target object in the question and answer information based on the corresponding question and answer information when the first graph neural network is a knowledge graph and the environment in which the first graph neural network is used is question and answer information processing." Here, KOU shows that the information processing module is part of the first identity management component and contains the first loss function, which is used to train the second graph neural network model along with a second embedding. The first loss function derives from information provided by the information processing module, which processes information related to the first graph neural network, which in turn is connected to the first identity management component. The specification notes in paragraph [0015] that "embodiments of identity management systems and methods for their operation may utilize graph neural networks for the implementation of identity management components." The specification thus defines the identity management component as encompassing a portion of any graph neural network model. KOU, overall, teaches training the second graph neural network based on the second embedding and the first loss function associated with the first identity management component. See paragraphs [0098], [0118], [0144]-[0146], and [0159]-[0164] of KOU for more information.
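For illustration of the record only, the loss combination KOU describes in paragraph [0055] (the sum of the first loss function, the second loss function, and the constraint loss function, with the constraint term scaled by its regularization weight hyperparameter) may be sketched as follows. The function and parameter names are the examiner's hypothetical labels, not KOU's:

```python
# Illustrative sketch only: the combined loss that "matches the second
# graph neural network" per KOU [0055] -- first loss + second loss +
# (regularization weight hyperparameter) x (constraint loss).

def combined_loss(first_loss: float, second_loss: float,
                  constraint_loss: float, reg_weight: float) -> float:
    """Return the loss matching the second graph neural network:
    the sum of the first and second losses plus the constraint loss
    scaled by its regularization weight hyperparameter."""
    return first_loss + second_loss + reg_weight * constraint_loss
```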
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of SHEKHAR and BADAWY21 with the teachings of KOU because all of the references teach methods of using graph neural networks to organize and process data associated with user profiles.
One of ordinary skill in the art would be motivated to do so because "the data processing method in the graph neural network provided in this application can not only continuously learn from new data, but also reduce the forgetting of old knowledge, which is beneficial to the user experience. Specifically, referring to Figure 8, which is a schematic diagram of the test effect of the data processing method in the graph neural network in the embodiment of the present invention, as the number of new relation triples increases, the data processing method in the graph neural network provided in this application achieves continuous learning, although the performance of the related technical models has decreased to a certain extent. However, compared with related technologies, the data processing method in the graph neural network provided by this invention has achieved significantly better results. Therefore, the data processing method in the graph neural network provided by this application is more effective in decoupling and dynamically updating relation triples in processing continuous multi-relation graph data learning, which is beneficial to improving the user experience" (KOU, paragraph [0157]).
Claim 12:
Regarding claim 12, SHEKHAR in view of BADAWY21 teaches the limitations in claim 9. Referring to claim 12, the claim recites similar limitations as corresponding claim 5 and is rejected for similar reasons as claim 5 using similar teachings and rationale.
Claim 13:
Regarding claim 13, SHEKHAR in view of BADAWY21, and further in view of KOU, teaches the limitations in claim 12. Referring to claim 13, the claim recites similar limitations as corresponding claim 6 and is rejected for similar reasons as claim 6 using similar teachings and rationale.
Claim 14:
Regarding claim 14, SHEKHAR in view of BADAWY21, and further in view of KOU, teaches the limitations in claim 13. Referring to claim 14, the claim recites similar limitations as corresponding claim 7 and is rejected for similar reasons as claim 7 using similar teachings and rationale.
Claim 19:
Regarding claim 19, SHEKHAR in view of BADAWY21 teaches the limitations in claim 16. Referring to claim 19, the claim recites similar limitations as corresponding claim 5 and is rejected for similar reasons as claim 5 using similar teachings and rationale.
Claim 20:
Regarding claim 20, SHEKHAR in view of BADAWY21, and further in view of KOU, teaches the limitations in claim 19. Referring to claim 20, the claim recites similar limitations as corresponding claim 6 and is rejected for similar reasons as claim 6 using similar teachings and rationale.
Claim 21:
Regarding claim 21, SHEKHAR in view of BADAWY21, and further in view of KOU, teaches the limitations in claim 20. Referring to claim 21, the claim recites similar limitations as corresponding claim 7 and is rejected for similar reasons as claim 7 using similar teachings and rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WENWEI ZENG whose telephone number is (571)272-7111. The examiner can normally be reached Monday-Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed can be reached at (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WenWei Zeng/Examiner, Art Unit 2146
/USMAAN SAEED/Supervisory Patent Examiner, Art Unit 2146