DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.
Terminal Disclaimer
The terminal disclaimer filed on 11/13/2025 disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of USPNs:
8,793,283
9,652,876
10,504,255
11,263,265
11,698,931
12,277,174
has been reviewed and is accepted. The terminal disclaimer has been recorded.
Response to Amendment
Applicant’s Response, filed 11/13/2025, amended claims 1-14. Claims 1-20 are pending.
The six double patenting rejections have been withdrawn due to the acceptance of the filed terminal disclaimer.
Response to Arguments
Applicant's arguments regarding the §101 rejection have been fully considered but they are not persuasive. Regarding the Step 2A Prong One argument, nothing prevents a label propagation algorithm from being performed mentally or by a human using pen and paper, because an algorithm is simply an organized series of steps or operations. Regarding the Step 2A Prong Two arguments concerning the newly added limitations, these generic computing hardware elements and data saving steps are analyzed in the updated rejection under Prong Two below.
Applicant’s arguments with respect to the § 103 rejection of the amended claims have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, as discussed in the 11/03/2025 examiner interview, a new ground of rejection is made in view of the Dean reference cited in the IDS filed 01/10/2025.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the judicial exception of an abstract idea without significantly more.
Step 1
The claims recite a method and a system (claims 1 & 11). These claims fall within at least one of the four categories of patentable subject matter.
Step 2A Prong One
Claim 1 recites “the graph data representing a graph comprising a plurality of vertices connected by a plurality of edges representing relationships among the plurality of vertices, each respective vertex of the plurality of vertices comprising a set of label values indicating a strength of association between the respective vertex and a set of labels, each respective edge comprising a set of label weights for influencing label values traversing the respective edge; and
executing a label propagation algorithm for the plurality of vertices in the graph for a series of synchronized supersteps, wherein, for a respective superstep of the series of synchronized supersteps at a respective vertex of the plurality of vertices, the respective superstep being an iteration that includes ordered stages of computation for each vertex, executing the label propagation algorithm comprises: updating a respective label value in the set of label values based on the weighted label value; determining that the set of label values has converged; and responsive to determining that the set of label values has converged, terminating execution of the label propagation algorithm.”
These steps analyze received information and calculate a value based on the received input to produce an output, which is an act of evaluating information that can be practically performed in the human mind. Thus, these steps are an abstract idea in the “mental process” grouping.
Claims 5-9 recite limitations that are further extensions of the identified grouping. Thus, these steps are an abstract idea in the “mental process” grouping. Claims 11-20 correspond to claims 1-10, respectively.
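For context only, the label propagation loop recited in claim 1 can be sketched as ordinary code. This is a hypothetical illustration; the function name, data layout, update rule, and convergence tolerance are assumptions of the sketch and are not drawn from the claims or the cited art:

```python
# Illustrative sketch of synchronous label propagation over a weighted graph.
# Each superstep, every vertex receives its neighbors' label values scaled by
# the corresponding edge weights, updates its own label values, and the loop
# terminates once the values converge.

def label_propagation(vertices, edges, labels, tol=1e-6, max_supersteps=100):
    # vertices: dict vertex -> {label: value}
    # edges: dict (src, dst) -> {label: weight}
    for superstep in range(max_supersteps):
        # Stage 1: gather weighted label values arriving over incoming edges.
        incoming = {v: {lab: 0.0 for lab in labels} for v in vertices}
        for (src, dst), weights in edges.items():
            for lab in labels:
                incoming[dst][lab] += vertices[src][lab] * weights.get(lab, 1.0)
        # Stage 2: update each vertex's label values (simple averaging rule,
        # chosen only for illustration) and track the largest change.
        delta = 0.0
        for v in vertices:
            for lab in labels:
                new_val = 0.5 * (vertices[v][lab] + incoming[v][lab])
                delta = max(delta, abs(new_val - vertices[v][lab]))
                vertices[v][lab] = new_val
        # Convergence check: terminate when no label value changed appreciably.
        if delta < tol:
            break
    return vertices
```

Because the gather stage reads only the values from the prior superstep, all vertices update in lockstep, which is the synchronized-superstep behavior the claim recites.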
Step 2A Prong Two
This judicial exception is not integrated into a practical application because the combination of additional elements includes only generic computer elements which do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer. These elements are: data processing hardware, persistent storage, message module, message queue, coordinating system, worker system, memory hardware, and distributed computing system.
Claim 1 recites “obtaining, by data processing hardware, graph data from a distributed computing system; saving, at a beginning of the respective superstep, a state of a partition of the graph data to persistent storage, wherein the partition of the graph data is associated with the respective vertex; and receiving an incoming message comprising a weighted label value”.
These limitations recite steps which amount to insignificant extra-solution activity of data gathering, such as receiving input, transmitting output, and updating/modifying data.
Claims 2-4 & 10 recite steps which amount to insignificant extra-solution activity of data gathering, such as receiving input, transmitting output, and updating/modifying data. Claims 11-20 correspond to claims 1-10, respectively.
Step 2B
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the recitations of generic computer components performing generic computer functions at a high level of generality do not meaningfully limit the claim. Further, the insignificant extra-solution activities of data gathering and presentation do not meaningfully limit the claim.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).
The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over:
Baluja et al. (PGPUB citation No. 1 in the IDS filed 01/10/2025, US 2008/0275861 A1, hereinafter “Baluja”) in view of
“Lecture 7: Two Layer Artificial Neural Networks” by Simon Colton (NPL citation No. 49 in the IDS filed 01/10/2025, published in 2006, hereinafter “Colton”), and further in view of
“MapReduce: Simplified Data Processing on Large Clusters” by Dean & Ghemawat (NPL citation No. 19 in the IDS filed 01/10/2025, published in 2009, hereinafter “Dean”).
Baluja teaches:
1. A method comprising:
obtaining, by data processing hardware, graph data from a distributed computing system [Baluja, Figs. 1 & 6A and ¶¶ 0038-0039 & 0124], the graph data representing a graph comprising a plurality of vertices connected by a plurality of edges representing relationships among the plurality of vertices [Baluja, Figs. 1 & 6A and ¶¶ 0038-0039 & 0124], each respective vertex of the plurality of vertices comprising a set of label values indicating a strength of association between the respective vertex and a set of labels [Baluja, Fig. 6A and ¶¶ 0091-0093], each respective edge comprising a set of label weights for influencing label values traversing the respective edge [Baluja, ¶ 0076, label value based on weight associated with edge].
Baluja does not explicitly teach: executing, by the data processing hardware, a label propagation algorithm for the plurality of vertices in the graph for a series of synchronized supersteps; receiving an incoming message comprising a weighted label value; updating a respective label value in the set of label values based on the weighted label value; determining that the set of label values has converged; and responsive to determining that the set of label values has converged, terminating execution of the label propagation algorithm.
However, Colton teaches:
executing, by the data processing hardware, a label propagation algorithm for the plurality of vertices in the graph for a series of synchronized supersteps [Colton, page 4, § 7.3, “Perceptrons” section],
receiving an incoming message comprising a weighted label value [Colton, pages 4 & 5, “Units” section’s first paragraph and figure at top of page 5, “user-defined parameter” corresponds to one of the different possibilities for a unit function, such as one with a coefficient];
updating a respective label value in the set of label values based on the weighted label value [Colton, pages 4 & 5, “Example” section spanning both pages, showing value of unit “S” being the sum of the weighted incoming units];
determining that the set of label values has converged [Colton, pages 4 & 5, “Example” section spanning both pages, showing value of unit “S” being the sum of the weighted incoming units]; and
responsive to determining that the set of label values has converged, terminating execution of the label propagation algorithm [Colton, pages 4 & 5, “Example” section spanning both pages, showing value of unit “S” being the sum of the weighted incoming units].
Baluja and Colton are in the same field of endeavor, graph processing. It would have been obvious to one of ordinary skill in the art at the time of the invention to combine Colton’s teaching of message passing used in a neural network, such as a perceptron, with the social networking graph processing taught by Baluja to achieve the predictable result of a large graph messaging system improved through implementation of machine learning. Baluja teaches the use of machine learning algorithms in weighting at paragraph [0058], and Colton teaches that Artificial Neural Networks are useful in statistics because as a representation system they are mathematical functions (Colton’s page 1, “ANN Representation” section 7.2).
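As context for Colton's weighted-sum example, a perceptron-style unit of the kind Colton describes can be sketched as follows. This is a hypothetical illustration; the function name and the step activation at a threshold are assumptions of the sketch, not text taken from Colton:

```python
def unit_output(inputs, weights, threshold=0.0):
    # Weighted sum of the incoming values, as in a simple perceptron unit:
    # each input is scaled by the weight on its incoming connection.
    s = sum(x * w for x, w in zip(inputs, weights))
    # Step activation: the unit fires (outputs 1) only if the weighted
    # sum reaches the threshold.
    return 1 if s >= threshold else 0
```

The weighted sum corresponds to the incoming weighted label values discussed above, and the per-unit update corresponds to a vertex updating its value from its weighted inputs.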
The combination of Baluja and Colton does not explicitly teach: wherein, for a respective superstep of the series of synchronized supersteps at a respective vertex of the plurality of vertices, the respective superstep being an iteration that includes ordered stages of computation for each vertex, executing the label propagation algorithm comprises: saving, at a beginning of the respective superstep, a state of a partition of the graph data to persistent storage, wherein the partition of the graph data is associated with the respective vertex.
However, Dean teaches:
wherein, for a respective superstep of the series of synchronized supersteps at a respective vertex of the plurality of vertices, the respective superstep being an iteration that includes ordered stages of computation for each vertex [Dean, pages 139-140, Fig. 1 and § 3.1, describing the map and reduce sequence of actions that occur, which correspond to the synchronized supersteps], executing the label propagation algorithm comprises:
saving, at a beginning of the respective superstep, a state of a partition of the graph data to persistent storage, wherein the partition of the graph data is associated with the respective vertex [Dean, page 140, § 3.1, action numbers 1, 3 & 4, starts up many copies of the program on a cluster of machines, periodically writes buffered pairs to local disk].
Baluja, Colton, and Dean are in the same field of endeavor, graph processing. It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Baluja and Colton with the distributed data processing taught by Dean to achieve the predictable result of a large scale distributed data processing system that is applicable to any workload that is divisible into parallel tasks, including graph processing. Dean’s §§ 2.1 & 2.3 at pages 138-139 describe example implementations that include graph applications to demonstrate the predictability of this usage.
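As context for Dean's checkpointing behavior, the recited saving of a partition's state at the start of a superstep can be sketched roughly as follows. This is a hypothetical illustration; the JSON serialization, file naming, and helper signature are assumptions of the sketch and are not drawn from Dean:

```python
import json
import os

def run_superstep(superstep, partition, compute, checkpoint_dir):
    # Save the partition's state to persistent storage at the start of the
    # superstep (cf. a map/reduce worker periodically writing buffered
    # intermediate data to local disk).
    path = os.path.join(checkpoint_dir, f"partition_{superstep}.json")
    with open(path, "w") as f:
        json.dump(partition, f)
    # Then run the compute stage for every vertex in the partition.
    return {v: compute(state) for v, state in partition.items()}
```

On worker failure, the superstep could be restarted from the saved state rather than from the beginning of the computation, which is the fault-tolerance benefit of the write-to-disk step.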
The combination of Baluja, Colton, and Dean teaches:
2. The method of claim 1, further comprising sending, by the data processing hardware, an outgoing message to a respective target vertex of the plurality of vertices, the outgoing message comprising the updated label value [Colton, pages 4 & 5, “Example” section spanning both pages, showing value of unit “S” being the sum of the weighted incoming units, showing output values of “S” based on the applied unit function].
3. The method of claim 2, wherein sending the outgoing message occurs during the respective superstep of the series of synchronized supersteps, the method further comprising:
sending, by the data processing hardware, a message to a coordinating system indicating that the respective vertex has completed the respective superstep of the series of supersteps [Dean, pages 139-140, Fig. 1 and § 3.1, describing the map and reduce sequence of actions that occur, which correspond to the synchronized supersteps]; and
receiving, by the data processing hardware, a signal to begin a subsequent superstep of the series of synchronized supersteps [Dean, pages 139-140, Fig. 1 and § 3.1, describing the map and reduce sequence of actions that occur, which correspond to the synchronized supersteps].
4. The method of claim 2, further comprising:
maintaining, by the data processing hardware and using a message module, a message queue for the plurality of vertices [Baluja, ¶ 0083, “In some implementations, the number of the iterations is specified in advance. In other implementations, the algorithm terminates when the label values for the labels at each node reach a steady state (e.g., a state where the difference in the label value change between iterations is smaller than a specified epsilon).”]; and
determining, by the data processing hardware, that the message queue satisfies a threshold size [Baluja, ¶ 0083],
wherein sending the outgoing message to the respective target vertex comprises sending the outgoing message responsive to determining that the message queue satisfies the threshold size [Baluja, ¶ 0083].
5. The method of claim 1, wherein each respective edge further comprises a directed edge [Colton, page 5, top figure, input layer unit identified as “X”(n) and connects to a target unit, which forms an edge identifier through the connection].
6. The method of claim 5, wherein the directed edge defines a communication direction from a source vertex to a target vertex [Colton, page 5, top figure, input layer unit identified as “X”(n) and connects to a target unit, which forms an edge identifier through the connection].
7. The method of claim 1, wherein the graph represents at least one of: a social graph; a computer network topology; or transportation routes in a geographic map [Baluja, ¶ 0048, social graph].
8. The method of claim 1, wherein each respective edge further comprises a target vertex identifier [Colton, page 5, top figure, input layer unit identified as “X”(n) and connects to a target unit, which forms an edge identifier through the connection].
9. The method of claim 1, wherein the graph data representing the graph comprises one or more graph partitions [Colton, page 5, top figure, layer names above the horizontal bar at the top correspond to graph partitions].
10. The method of claim 9, wherein each respective graph partition is assigned to a corresponding worker system [Baluja, ¶ 0144].
Claims 11-20 recite limitations similar to those recited in claims 1-10, respectively, and are rejected for the same reasons discussed above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Scott A. Waldron whose telephone number is (571)272-5898. The examiner can normally be reached Monday - Friday 9:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Neveen Abel-Jalil can be reached at (571)270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Scott A. Waldron/Primary Examiner, Art Unit 2152 11/29/2025