Prosecution Insights
Last updated: April 17, 2026
Application No. 19/082,503

METHOD FOR MATCHING FAMILIES, FRIEND SETS, HOUSEHOLDS, NEIGHBORS, GROUPS AND COMMUNITIES FOR SOCIAL INTERACTIONS AND TRANSACTIONS

Non-Final OA (§101, §103)
Filed
Mar 18, 2025
Examiner
BROWN, LUIS A
Art Unit
3626
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
unknown
OA Round
1 (Non-Final)
Grant Probability: 46% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 9m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 46% (274 granted / 598 resolved; -6.2% vs TC avg)
Interview Lift: +31.0% for resolved cases with interview
Typical Timeline: 3y 9m average prosecution; 35 applications currently pending
Career History: 633 total applications across all art units
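
The headline figures above can be reproduced from the raw counts. A quick sanity check in Python (the 77% with-interview rate is taken from the report's projections, not derived here):

```python
# Figures reported for this examiner: 274 grants out of 598 resolved cases.
granted, resolved = 274, 598
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # 45.8%, shown rounded as 46%

# Interview lift is the reported with-interview rate minus the career rate.
with_interview = 0.77
lift = with_interview - round(career_allow_rate, 2)
print(f"Interview lift: {lift:+.1%}")  # +31.0%
```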

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 41.2% (+1.2% vs TC avg)
§102: 9.6% (-30.4% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 598 resolved cases
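
The per-statute deltas are mutually consistent: subtracting each delta from its rate recovers the same implied Tech Center baseline of 40.0%. A quick check (not part of the report itself):

```python
# Reported per-statute success rates (%) and deltas vs. the Tech Center average.
rates  = {"101": 31.8, "103": 41.2, "102": 9.6, "112": 13.9}
deltas = {"101": -8.2, "103": 1.2, "102": -30.4, "112": -26.1}

# Implied TC average = rate - delta; every statute recovers the same baseline.
implied = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```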

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Status of Claims

The following is a FIRST, NON-FINAL OFFICE ACTION for Application #19/082,503, filed on 03/18/2025. This application claims priority to Provisional Application #63/567,266, filed on 03/19/2024. Claims 1-20 are pending and have been examined.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The rationale for this finding is explained below.

Per Step 1 of the analysis, the claims are analyzed to determine if they are directed to statutory subject matter. Claim 1 claims a method, or process. A process is a statutory category for patentability.

Per Step 2A, Prong 1 of the analysis, the examiner must now determine if the claims recite an abstract idea or eligible subject matter. In the instant case, the independent claims are directed towards an abstract idea. Specifically, independent claim 1 recites “identifying a plurality of data points containing demographic, preference, and other descriptive or relevant information to matching families, friend sets, households, neighbors, and communities for social interactions and transactions, assigning point weights to the plurality of data points, and calculating a matching score, classification, or alternative algorithmic output determining the similarities of two or more families, friend sets, households, neighbors, and communities.”

Therefore, the claims recite an abstract idea, namely “certain methods of organizing human activity.” Specifically, the claims recite “managing personal behavior or relationships or interactions between people including social activities” (see MPEP 2106.04(a)(2)(II)). The claims optimize action plan schedules in such as an educational setting in which agents, such as faculty and staff, have meetings, interventions, and other plans in place, gaps are identified in the framework relative to student performance, and the optimization schedule framework is adjusted to meet the gaps by subsequent actions to be taken. An administrator, consultant, or other authority could analyze the available data, schedule, and educational data, identify gaps, and initiate adjustments to address the gaps. This often takes place in schools, universities, corporations, and businesses. The claims simply automate these practices using a computer. Therefore, the claims recite an abstract idea, namely “business relations” and “managing personal behavior - teaching.”

The claims secondarily recite a mental process. An administrator, consultant, or other authority with access to the temporal data, gap data, and the schedule could analyze the available data, schedule, and educational data, identify gaps, and make determinations or decisions to initiate adjustments to address the gaps. This often takes place in schools, universities, corporations, and businesses. The claims simply automate these practices using a computer. Therefore, the claims secondarily recite a mental process.

Per Step 2A, Prong 2 of the analysis, the examiner must now determine if the claims integrate the abstract idea into a practical application. The additional elements of the independent claims include “a computer memory device” and a “processor.” However, these additional elements are considered generic recitations of a technical element and are recited at a high level of generality. These additional elements are being used as “tools to automate the abstract idea” (see MPEP 2106.05(f)) and are not recitations of a special purpose computer or transformation (see MPEP 2106.05(b) and (c)). Therefore, these additional elements are not considered to integrate the abstract idea into a practical application. The additional elements also include the actual “storing on a computer memory device, a plurality of data points….” This limitation is considered a generic recitation of a technical element. Further, “storing and retrieving information in a memory” is listed in MPEP 2106.05(d)(II)(iv) as an example of conventional computer functioning (see Versata Dev. Grp. v. SAP). Therefore, these additional elements are not considered to integrate the abstract idea into a practical application.

Per Step 2B of the analysis, the examiner must now determine if the claims include limitations that are “significantly more” than the abstract idea by demonstrating an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of an abstract idea to a particular technological environment. The additional elements of the independent claims include “a computer memory device” and a “processor.” However, these additional elements are considered generic recitations of a technical element and are recited at a high level of generality. These additional elements are being used as “tools to automate the abstract idea” (see MPEP 2106.05(f)) and are not recitations of a special purpose computer or transformation (see MPEP 2106.05(b) and (c)). Therefore, these additional elements are not considered significantly more than the abstract idea itself. The additional elements also include the actual “storing on a computer memory device, a plurality of data points….” This limitation is considered a generic recitation of a technical element. Further, “storing and retrieving information in a memory” is listed in MPEP 2106.05(d)(II)(iv) as an example of conventional computer functioning (see Versata Dev. Grp. v. SAP). Therefore, these additional elements are not considered significantly more than the abstract idea itself.

When considered as an ordered combination, the claim is still considered to be directed to an abstract idea, as the claim steps in the ordered combination simply recite the logical steps for analyzing data points, assigning data point weights, and calculating a matching score, classification, or algorithmic output. Therefore, the ordered combination does not lead to a determination of significantly more.

When considering the dependent claims, claims 2, 5, and 8 are considered part of the abstract idea, as each further limits what is still a calculating step, which is part of the managing of relationships or a mental process. Claim 3 is considered part of the abstract idea, as the filtering is considered part of the managing of the social relationships when doing data analysis, or also part of the mental process. Claim 4 is considered conventional computer functioning, as MPEP 2106.05(d)(II)(i) includes “receiving or transmitting data over a network” as an example of conventional computer functioning (see Symantec). Claim 6 is considered the equivalent of “apply it,” or using a computer as a tool to automate the abstract idea (see MPEP 2106.05(f)), and is not a recitation of a special purpose computer or transformation (see MPEP 2106.05(b) and (c)). The use of machine learning is recited at a high level of generality with no detail as to the training, use, or technical application of the machine learning. Therefore, this additional element is not considered to integrate the abstract idea into a practical application and is not considered significantly more.

Claim 7 is considered part of the abstract idea, as adjusting point weights is considered part of the management of personal interactions and of the analysis and determination that is part of the mental process. Claim 9 as written is considered part of the abstract idea, as the results being “presented” recite no technology, and presentation can be done as part of the abstract idea.

Therefore, claims 1-9 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. See Alice Corporation Pty. Ltd. v. CLS Bank International et al., 2014 (please reference the updated publicly available Alice memo at http://www.uspto.gov/patents/announce/alice_pec_25jun2014.pdf as well as the USPTO January 2019 Updated Patent Eligibility Guidance).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Carbonell, et al., Pre-Grant Publication No. 2014/0222806 A1, in view of Donnelly, et al., Pre-Grant Publication No. 2017/0293878 A1.

Regarding Claim 1, Carbonell teaches: A computer-implemented method for matching families, friend sets, households, neighbors, and communities for social interactions and/or transactions, comprising:

storing, on a computer memory device, a plurality of datapoints containing preference, and other descriptive or relevant information to matching families, friend sets, households for social interactions and transactions (see [0005], [0021], and [0025], in which a database is maintained that stores profiles of social network users including interests, preferences, and other datapoints; see also [0023], in which the attributes in the profile can also include demographics such as age and gender along with interests and other relevant information);

assigning point weights to the plurality of datapoints (see [0008], which teaches weighted preferences of users/profiles prior to calculating a match score; see also [0028]-[0032], in which point weights are assigned to the various attributes of a user profile and these are used in calculating match scores between users);

calculating, by a processor, a matching score, classification, or alternative algorithmic output determining the similarities of two or more families, friend sets, households (see Abstract, [0005], [0008], [0038]-[0042], and [0051]-[0052], in which matching scores are calculated for user profiles and used as a measure of similarity).

**The examiner notes that the applicant’s filed specification includes a description in [0015] of the “groups, families” being optionally such as one single person, so the matching similarity scores in these claims could also apply to a single person; further, it is known in the social networking arts that a “user,” a “node,” or a “profile” can include a group, a community, a family, or other single or multiple people that could be associated with a user profile. Therefore, Carbonell is determined to read on “friend sets, families, households.”**

Carbonell, however, does not appear to specify: neighbors, communities.

Donnelly teaches: neighbors, communities (see Abstract, Figures 3-4b, [0052], [0056], and [0071]-[0073], in which matching attribute scores are used to match senior communities and neighboring senior communities to seniors based on demographics, geographic location, and other attribute factors).

It would have been obvious to one of ordinary skill in the art at the time of the filing of the application to combine Donnelly with Carbonell because Carbonell already teaches other user groups, families, and households for matching similarity scores in such as a social network, and it is known in the art that nodes, users, and profiles could be associated with communities and neighbors; communities and neighbors allow for geographically close entities to be associated or suggested for association, which is attractive because social interaction is easier when geographically close.

Regarding Claim 2, the combination of Carbonell and Donnelly teaches the method of claim 1. Carbonell further teaches: wherein calculating by the processor entails assigning point weights to the plurality of datapoints and calculating, by a processor, a matching score (see [0008], which teaches weighted preferences of users/profiles prior to calculating a match score; see also [0028]-[0032], in which point weights are assigned to the various attributes of a user profile and these are used in calculating match scores between users; see also Abstract, [0005], [0008], [0038]-[0042], and [0051]-[0052], in which matching scores are calculated for user profiles and used as a measure of similarity); mathematical algorithms (see [0035] and [0044]).

Regarding Claim 3, the combination of Carbonell and Donnelly teaches the method of claim 1. Donnelly further teaches: a filtering step according to user-defined filtering parameters (see Abstract and [0111], in which filtering criteria defined by such as the community or a user are used to filter the datapoints and the recommendations). It would have been obvious to one of ordinary skill in the art at the time of the filing of the application to combine Donnelly with Carbonell because Carbonell already teaches matching similarity scores in such as a social network, and using user filter criteria would help identify similar matches for recommendation based not only on the match but on filter criteria that the user has indicated is important to them.

Regarding Claim 4, the combination of Carbonell and Donnelly teaches the method of claim 1. Carbonell further teaches: displaying a match based on the matching score (see [0004]-[0005], [0008], and [0055], in which the matched recommended users are displayed).

Regarding Claim 5, the combination of Carbonell and Donnelly teaches the method of claim 1. Carbonell further teaches: wherein the calculating step includes determining indirect commonalities between families, friend sets, households, neighbors, and communities by assigning point values to related but distinct interests and attributes (see [0035], in which similar but distinct interests, such as one user having a goal of speaking French and one user being a fluent French speaker, are used by the matching algorithm to match users; see also [0053], in which matches are selected based on a higher-level category of interest that both interests match, even if the direct interests of the users do not match).

Regarding Claim 6, the combination of Carbonell and Donnelly teaches the method of claim 2. Carbonell further teaches: wherein machine learning techniques, including but not limited to clustering algorithms, association rule learning, and neural networks, are employed to identify non-obvious interest correlations between families, friend sets, households, and communities (see [0035]-[0049], in which vector similarities, k-vector similarity techniques, coefficient matrices, and other machine learning algorithmic techniques are used to identify direct and indirect non-obvious matches as in [0035], [0046], and [0053]).

Regarding Claim 7, the combination of Carbonell and Donnelly teaches the method of claim 1. Carbonell further teaches: wherein the point weights assigned to the plurality of datapoints are dynamically adjusted based on the context of the match (see [0049]-[0051], in which values change over time and the scores can be dynamically calculated as criteria, interaction, and other datapoints change over time), such that different weightings are applied for social interactions versus other interactions. Donnelly further teaches: different weightings are applied for transactional interactions (see Abstract, [0008], and [0046]-[0050], in which needs and services are datapoints that match scores are assigned to). **The examiner notes that the applicant’s filed specification describes “transactions” not as purchases or financial exchanges but as “needs or wants” of a user.**

Regarding Claim 9, the combination of Carbonell and Donnelly teaches the method of claim 1. Carbonell further teaches: wherein the results of the matching process are presented in one or more alternative formats, including but not limited to ranked lists (see [0037], in which the recommendations based on match scores are presented to the user as a ranked list).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Carbonell, et al., Pre-Grant Publication No. 2014/0222806 A1, in view of Donnelly, et al., Pre-Grant Publication No. 2017/0293878 A1, and in further view of Shivakumar, Pre-Grant Publication No. 2014/0289261 A1.

Regarding Claim 8, the combination of Carbonell and Donnelly teaches the method of claim 1. Carbonell and Donnelly, however, do not appear to specify: calculating a trust score for each family, friend set, household, or community entity based on prior interactions, social network connections, and verified transaction history. Shivakumar teaches: calculating a trust score for each family, friend set, household, or community entity based on prior interactions, social network connections, and verified transaction history (see Abstract, [0005], and [0023]-[0028]). It would have been obvious to one of ordinary skill in the art at the time of the filing of the application to combine Shivakumar with Carbonell and Donnelly because Carbonell already teaches scores based on a degree of similarity, and using a trust score would allow for an indication of reliability based on other factors that show trust in the authenticity and interaction of the user rather than only profile-type datapoints.

Conclusion

The following prior art references were not relied upon in this Office action but are considered pertinent to the applicant’s invention:

Bruich, et al., Pre-Grant Publication No. 2017/0220693 A1 - teaches using match scores based on attribute weights to match users to groups in a social network; this is done using machine learning algorithms.

Ju, et al., Pre-Grant Publication No. 2016/0134576 A1 - teaches using match scores based on attribute weights to match users to groups in a social network; this is done using machine learning algorithms.

Hsu, William H., et al. "Collaborative and Structural Recommendation of Friends using Weblog-based Social Network Analysis." AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs. Vol. 6. 2006.

Han, Xiao, et al. "CSD: A multi-user similarity metric for community recommendation in online social networks." Expert Systems with Applications 53 (2016): 14-26.
Any inquiry of a general nature or relating to the status of this application or concerning this communication or earlier communications from the Examiner should be directed to Luis A. Brown, whose telephone number is 571.270.1394. The Examiner can normally be reached Monday-Friday, 8:30am-5:00pm EST. If attempts to reach the examiner by telephone are unsuccessful, the Examiner’s supervisor, JESSICA LEMIEUX, can be reached at 571.270.3445.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal/pair . Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866.217.9197 (toll-free).

Any response to this action should be mailed to: Commissioner of Patents and Trademarks, Washington, D.C. 20231, or faxed to 571-273-8300. Hand-delivered responses should be brought to the United States Patent and Trademark Office Customer Service Window: Randolph Building, 401 Dulany Street, Alexandria, VA 22314.

/LUIS A BROWN/
Primary Examiner, Art Unit 3626
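
The claim-1 steps at issue (storing datapoints, assigning point weights, calculating a matching score) describe a generic weighted-similarity computation, which is the core of the examiner's §101 characterization. A minimal illustrative sketch of such a computation; all attribute names, weights, and profiles below are hypothetical and are not taken from the application:

```python
def match_score(profile_a: dict, profile_b: dict, weights: dict) -> float:
    """Weighted similarity between two household profiles.

    Sums the weights of attributes on which the two profiles agree and
    normalizes by the total weight, yielding a score in [0, 1].
    """
    total = sum(weights.values())
    agreed = sum(w for attr, w in weights.items()
                 if profile_a.get(attr) == profile_b.get(attr))
    return agreed / total

# Hypothetical household profiles and point weights.
weights = {"zip": 2.0, "has_pets": 1.0, "kids_age_band": 3.0}
a = {"zip": "22314", "has_pets": True,  "kids_age_band": "5-8"}
b = {"zip": "22314", "has_pets": False, "kids_age_band": "5-8"}

# Agrees on zip (2.0) and kids_age_band (3.0) out of 6.0 total.
print(match_score(a, b, weights))  # ≈ 0.833
```

This mirrors the Office Action's description of the ordered combination as "analyzing data points, assigning data point weights, and calculating a matching score"; nothing here reflects the application's actual claimed implementation.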

Prosecution Timeline

Mar 18, 2025: Application Filed
Jan 15, 2026: Non-Final Rejection under §101 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572948
PREDICTIVE MAINTENANCE SYSTEM FOR BUILDING EQUIPMENT WITH RELIABILITY MODELING BASED ON NATURAL LANGUAGE PROCESSING OF WARRANTY CLAIM DATA
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12542203
DISTRIBUTED COMPUTER SYSTEM FOR COORDINATING MESSAGING AND FUNDING FOR HEALTHCARE EXPENSES INCLUDING FUNDING VIA NETWORKED CROWDSOURCING
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12536504
COLLABORATIVE WORKSPACES FOR BROWSERS
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12480674
HVAC EQUIPMENT HEALTH CHECK AFTER WEATHER EVENT
Granted Nov 25, 2025 (2y 5m to grant)
Patent 12469013
METHOD TO MANAGE CABLE TV LEAKAGE WHEN LEAKS ARE NO LONGER DETECTABLE
Granted Nov 11, 2025 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 46% (77% with interview, +31.0% lift)
Median Time to Grant: 3y 9m
PTA Risk: Low
Based on 598 resolved cases by this examiner. Grant probability derived from career allow rate.
