Prosecution Insights
Last updated: April 19, 2026
Application No. 18/674,732

COLLABORATIVE COMPONENTS FRAMEWORK FOR CONTENT-BASED RECOMMENDATIONS SYSTEM

Non-Final OA — §101, §103
Filed: May 24, 2024
Examiner: VIG, NARESH
Art Unit: 3622
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Intuit Inc.
OA Round: 3 (Non-Final)

Grant Probability: 37% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 2m
With Interview: 80%

Examiner Intelligence

Career Allow Rate: 37% (223 granted / 607 resolved; -15.3% vs TC avg)
Interview Lift: +43.8% for resolved cases with an interview
Avg Prosecution: 4y 2m typical timeline
Currently Pending: 47 applications
Total Applications: 654 across all art units (career history)
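The headline figures above are simple ratios. A quick sketch of the arithmetic follows; the granted and resolved counts come straight from the page, while the without-interview rate is not displayed and is inferred here from the 80% and +43.8% figures.

```python
# Reproducing the headline examiner statistics shown above.
granted, resolved = 223, 607
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")        # prints "Career allow rate: 36.7%"

with_interview = 0.80     # allow rate for resolved cases with an interview (displayed)
interview_lift = 0.438    # displayed lift over cases without an interview
without_interview = with_interview - interview_lift  # inferred, not displayed
print(f"Without interview: {without_interview:.1%}") # prints "Without interview: 36.2%"
```

The page rounds the 36.7% career rate to the 37% headline figure.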

Statute-Specific Performance

§101: 29.4% (-10.6% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 2.6% (-37.4% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)
Baseline is the Tech Center average estimate • Based on career data from 607 resolved cases
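The per-statute deltas above are all consistent with a single Tech Center baseline: backing the baseline out of any row gives 40.0% (for example, 29.4% + 10.6% = 40.0%). A minimal check, using the rates exactly as displayed; note the page does not state what each percentage measures (e.g., allowance rate after that rejection type).

```python
# Reproducing the per-statute deltas shown above against the implied
# Tech Center baseline of 40.0%.
tc_avg = 0.400
examiner_rates = {"101": 0.294, "103": 0.439, "102": 0.026, "112": 0.177}
deltas = {s: r - tc_avg for s, r in examiner_rates.items()}
for statute, rate in examiner_rates.items():
    print(f"§{statute}: {rate:.1%} ({deltas[statute]:+.1%} vs TC avg)")
```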

Office Action

Rejections: §101, §103
DETAILED ACTION

This is in reference to the communication received 08 January 2026. Claims 1-20 are pending for examination. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. In accordance with the Federal Register Notice: 2019 Revised Patent Subject Matter Eligibility Guidance (January 7, 2019) (accessible at https://www.govinfo.gov/content/pkg/FR-2019-01-07/pdf/2018-28282.pdf), the rationale for this determination is explained below.

Claim 11, with claim 1 as representative, is directed to a method, which is a statutory category of invention. Although claim 1 falls within a statutory category, it appears to be directed to a judicial exception, namely an abstract idea. Claim 1 recites generating a knowledge graph using stored text-based content data associated with a campaign and adjusting (increasing or decreasing) the edge weights based on consumption by a common collaborative group. New text-based campaign content is then processed by modifying text through keyword emphasis and minimization (duplicating keywords for emphasis and removing keywords for deemphasis) to generate modified text; text similarity scores are recalculated using the modified text; the knowledge graph is reconstructed with updated edge weights to identify a closest customer group; the new text-based campaign (not the modified campaign) is delivered to customers associated with that group; and engagement metrics of the delivered campaign are analyzed and text modification parameters are adjusted to optimize future modifications. As drafted, this is a process that, under its broadest reasonable interpretation, covers organizing certain methods of human activity, namely advertising, marketing, or sales activities or behaviors. The additional functional element of using natural language processing to determine content similarity likewise, under its broadest reasonable interpretation, falls within the same grouping. The remaining independent claims, which recite other statutory categories (machine, product of manufacture, for example), are subject to the same analysis because the method steps are the same.

However, the judicial exception is not integrated into a practical application. The claims add only generic computer components (additional elements): a system comprising one or more hardware processors and a memory. The processor, memory, and non-transitory machine-readable medium are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

The claims also do not include additional elements sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration into a practical application, the processor, memory, and non-transitory machine-readable medium amount to no more than mere instructions to apply the exception using a generic computer component, which cannot provide an inventive concept. When taken as an ordered combination, nothing is added that is not already present when the elements are taken individually; viewed as a whole, the marketing activities amount to instructions applied using generic computer components. The claims are not patent eligible.

As for dependent claims 2-10 and 12-20, these claims depend from the aforementioned independent claims and include all the limitations contained therein. They do not recite any additional technical elements and simply disclose limitations that further limit the abstract idea: descriptions of various data, what technology can be used, identifying similarities in the data records, identifying click-through rates, counting shares of the content, and applying a time-decay factor to assign weights (e.g., multipliers). Thus, the dependent claims merely provide additional non-structural (and predominantly non-functional) details that fail to meaningfully limit the claims or the abstract idea. Therefore, claims 1-20 are not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. § 103 as being unpatentable over Jeong et al., US Publication 2021/0089598 (hereinafter Jeong), in view of the Gizem Unal article "Use knowledge graphs to understand your customers" (hereinafter Unal), the Ken Peluso article "What Is the Google Knowledge Graph & How Does It Work?" (hereinafter Peluso), and the Kurtis Pykes article "Stemming and Lemmatization in Python" (hereinafter Pykes).

Regarding claim 11 and representative claim 1, Jeong teaches a system and method for providing content-based recommendations (Jeong: when the context data indicates a context associated with baseball, the device 130 may identify that an entity indicating baseball and an entity indicating baseball news are associated in the server knowledge graph 120; the device 130 may determine baseball news as the first recommended content) [Jeong, 0135], comprising a server including a processor (Jeong: a processor configured to execute the one or more instructions) [Jeong, 0016].

Jeong does not explicitly teach a knowledge graph related to a campaign. However, Unal teaches: "As a marketing campaign & automation manager, I spend a lot of time analyzing data from multiple sources to find out what attracts people to our website. Whether it's a tweet, an email or a Google advert, those who kindly give their consent for tracking marketing preferences help us understand what's performing well" [Unal, page 1]. Unal further recites: "Our marketing strategies use multiple channels to reach more than one type of target audience. When we zoom into the chart we can see which channels complement each other." [Unal, page 7]. In addition, Unal recites using knowledge graph visualization to focus on which marketing channels are most effective at reaching an audience and attracting them to a website [Unal, page 3]. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to modify Jeong by adopting the teachings of Unal to find out how customers engage with a brand, who they are doing business with, and how their relationships evolve.

Jeong in view of Unal teaches the system and method further comprising: a database comprising campaign content data from text-based campaigns (Unal: In our dataset, there are multiple URLs for the same webpage because of essential redirected pages or additional codes for tracking web traffic. We removed those tracking parameters and grouped the same URLs together.) [Unal, page 4]; and a server including a processor (Jeong: a processor configured to execute the one or more instructions) [Jeong, 0016] configured for constructing a knowledge graph from text-based campaign content data received via electronic text-based content delivery (Jeong: the device 130 may generate knowledge data from log history information, for example by inputting log history information to a knowledge graph generation model; the knowledge graph generation model may refer to a data generation model that, when data such as log history information is input, processes the input data and outputs knowledge data for generating the knowledge graph. Using the knowledge graph generation model, the device 130 may obtain knowledge data expressed in a text form from the log history information) [Jeong, 0095].

Jeong in view of Unal does not explicitly teach using natural language processing (NLP). However, Peluso teaches that the Google Knowledge Graph is an intelligent model that understands facts about people, places, and things and how these entities are all connected; this tool leverages semantic search techniques, processing natural language queries to offer a concise summary of the subject from various sources, and through detailed analysis of users' search patterns improves the relevance of search results, tailoring information to the user's needs and interests [Peluso, page 3]. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to modify Jeong in view of Unal by adopting the teachings of Peluso to present information in an easy-to-understand format and provide more relevant and accurate results.
Jeong in view of Unal and Peluso teaches the system and method further comprising: constructing a knowledge graph from text-based campaign content data received via electronic text-based content delivery (as responded to above) [Jeong, 0095], wherein the knowledge graph is constructed to include nodes representing individual text-based campaigns and edge weights representing content similarity determined using natural language processing and collaborative consumption between text-based campaigns and user interaction data received via electronic communication (as responded to above) [Peluso, page 3]; adjusting the edge weights within the knowledge graph based on the collaborative consumption of customer groups, by increasing edge weights for campaign node pairs consumed by a common collaborative group and decreasing edge weights for campaign node pairs consumed by different groups, to emphasize common keywords belonging to common customer groups and deemphasize keywords belonging to different customer groups (Jeong: different weights may be assigned to the first recommended content and the second recommended content through the ranking algorithm; a weight higher than that of the first recommended content may be assigned to the second recommended content determined based on the pattern knowledge graph reflecting the behavior pattern of the user, among the first recommended content determined based on the server knowledge graph 120 and the second recommended content determined based on the updated device knowledge graph) [Jeong, 0140, 0141]; and processing new text-based campaign content by modifying text through keyword emphasis and minimization (Jeong: a weight higher than that of the first recommended content may be assigned to the second recommended content determined based on the pattern knowledge graph reflecting the behavior pattern of the user) [Jeong, 0141].

Jeong in view of Unal and Peluso does not teach duplicating or removing keywords to generate modified text. However, Pykes teaches that stemming is a technique used to reduce an inflected word down to its word stem, a common form that can serve as a synonym for a keyword (e.g., pluralizing a word is similar to duplicating the keyword, and substituting a synonym is similar to removing the keyword) [Pykes, page 3], and that lemmatization is used to accurately determine the intended part of speech and meaning of a word based on its context [Pykes, page 4]. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to modify Jeong in view of Unal and Peluso by adopting the teachings of Pykes to improve model performance, group similar words, and analyze, compare, and understand texts.

Jeong in view of Unal, Peluso, and Pykes teaches the system and method further comprising: processing new text-based campaign content by modifying text through keyword emphasis and minimization by duplicating keywords for emphasis and removing keywords for deemphasis to generate modified text, recalculating text similarity scores using the modified text, reconstructing the knowledge graph with updated edge weights, and identifying a closest customer group based on the reconstructed knowledge graph (as responded to above) [Pykes, pages 3-4, Fig. 5C and associated disclosure]; performing targeted delivery of the new text-based campaign, without the duplicated and removed keywords, to customers associated with that group via electronic text-based content delivery (Unal: contacts click on a link from an email we've sent them) [Unal, page 4]; analyzing engagement metrics of delivered campaigns (Unal: We're interested in analyzing website activity, so we filtered out everything else) [Unal, page 4]; and adjusting text modification parameters using reinforcement learning to optimize future modifications (Jeong: the server may update the server knowledge graph periodically or aperiodically and may transmit the updated server knowledge graph to the device; the device may update the device knowledge graph based on the server knowledge graph received from the server and the log history information about the device) [Jeong, 0065, 0246].

Regarding claim 12 and representative claim 2, as combined and under the same rationale as above, Jeong in view of Unal, Peluso, and Pykes teaches the system and method further comprising collecting the text-based campaign content data from a plurality of sources, including social media platforms, email campaigns, and web advertisements (Unal: We're also only interested in analyzing website activity, so we filtered out everything else. This means we can focus on six marketing channels. Organic: contacts find us through online search engine results. Email: contacts click on a link from an email we've sent them. Paid: contacts click on an online advert. Referral: contacts visit our pages via a third party site, maybe through a link on the site of one of our Technology Alliance partners. Direct: contacts reach us without going through a referral, perhaps by typing our web address straight into their web browser. Social: contacts click on a link from Twitter, Google My Business, LinkedIn, etc.) [Unal, page 4].
Regarding claim 13 and representative claim 3, as combined and under the same rationale as above, Jeong in view of Unal, Peluso, and Pykes teaches the system and method further comprising representing each text-based campaign as a node within the knowledge graph based on the campaign content data [Jeong, Fig. 11 and associated disclosure].

Regarding claim 14 and representative claim 4, as combined and under the same rationale as above, Jeong in view of Unal, Peluso, and Pykes teaches the system and method further comprising applying natural language processing techniques to determine content similarity scores between pairs of campaign nodes (Jeong: device 130 may collect data from structured sources or unstructured sources so as to obtain knowledge data associated with the generation of the knowledge graph; the structured sources may include relational databases, feeds, catalogs, directories, and the like, and the unstructured sources may include web pages, texts, speeches, images, videos, and the like) [Jeong, 0074].

Regarding claim 16 and representative claim 6, as combined and under the same rationale as above, Jeong in view of Unal, Peluso, and Pykes teaches the system and method further comprising increasing the edge weights for pairs of campaign nodes that are consumed by a collaborative group of content consumers, thereby enhancing the content similarity within the collaborative group (Jeong: a weight higher than that of the first recommended content may be assigned to the second recommended content determined based on the pattern knowledge graph reflecting the behavior pattern of the user) [Jeong, 0141].

Regarding claim 18 and representative claim 8, as combined and under the same rationale as above, Jeong in view of Unal, Peluso, and Pykes teaches the system and method further comprising utilizing the user interaction data to identify the common customer groups, such data including but not limited to click-through rates, content consumption time, and sharing metrics (Unal: the six marketing channels quoted above for claim 12) [Unal, page 4].

Regarding claim 19 and representative claim 9, as combined and under the same rationale as above, Jeong in view of Unal, Peluso, and Pykes teaches the system and method further comprising dynamically adjusting the edge weights in response to changes in the user interaction data, ensuring that the knowledge graph reflects current consumption patterns (Jeong: a weight higher than that of the first recommended content may be assigned to the second recommended content determined based on the pattern knowledge graph reflecting the behavior pattern of the user, among the first recommended content determined based on the server knowledge graph 120 and the second recommended content determined based on the updated device knowledge graph) [Jeong, 0141]. Official notice is taken that, at the time of the invention, it would have been obvious to one of ordinary skill in the art to decrease the weight of a content item, thereby demoting that content relative to other content, in lieu of increasing the weight of the other content.

Claims 5 and 15 are rejected under 35 U.S.C. § 103 as being unpatentable over Jeong in view of Unal, Peluso, and Pykes, further in view of the Aajanki article "Sentence embedding models" (GitHub.io; hereinafter Aajanki).

Regarding claim 15 and representative claim 5, Jeong in view of Unal, Peluso, and Pykes does not teach using a sentence embedding model. However, Aajanki teaches that a crucial component in most natural language processing (NLP) applications is finding an expressive representation for text, and that modern methods are typically based on sentence embeddings that map a sentence onto a numerical vector [Aajanki, page 1]. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to modify Jeong in view of Unal, Peluso, and Pykes by adopting the teachings of Aajanki to build applications that react more naturally to the sentiment and topic of written text. As combined and under the same rationale as above, Jeong in view of Unal, Peluso, Pykes, and Aajanki teaches the system and method wherein the natural language processing techniques include using sentence embedding models to determine the content similarity scores between the campaign nodes [Aajanki, page 1].

Claims 7, 10, 17, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Jeong in view of Unal, Peluso, and Pykes, further in view of the Masayuki Karasuyama article "Adaptive edge weighting for graph-based learning algorithms" (hereinafter Karasuyama).

Regarding claim 17 and representative claim 7, Jeong in view of Unal, Peluso, and Pykes does not teach decreasing an edge weight. However, Karasuyama teaches optimizing edge weights through local linear reconstruction error minimization under a constraint that edges are parameterized by a similarity function of node pairs; as a result, the generated graph can capture the manifold structure of the input data, where each edge represents the similarity of each node pair [Karasuyama, page 308]. Karasuyama further recites that AEW only optimizes the weights of edges, and redundant weights can be reduced drastically because, in the Gaussian kernel, weights decay exponentially according to the squared distance [Karasuyama, page 320]. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to modify Jeong in view of Unal, Peluso, and Pykes by adopting the teachings of Karasuyama to implement real-time updates to knowledge graphs with weighted edges to identify relevant information in a dynamic environment.
As combined and under the same rationale as above, Jeong in view of Unal, Peluso, Pykes, and Karasuyama teaches the system and method further comprising decreasing the edge weights for pairs of campaign nodes that are not consumed by a common collaborative group of content consumers, thereby reducing the content similarity across different collaborative groups [Karasuyama, pages 308, 320].

Regarding claim 20 and representative claim 10, as combined and under the same rationale as above, Jeong in view of Unal, Peluso, Pykes, and Karasuyama teaches the system and method further comprising applying a decay factor to the edge weights over time, to account for the evolving interests of the common customer groups and maintain the relevance of the knowledge graph (Karasuyama: AEW only optimizes the weights of edges, and redundant weights can be reduced drastically because, in the Gaussian kernel, weights decay exponentially according to the squared distance) [Karasuyama, page 320].

Response to Arguments

Applicant's arguments filed 08 January 2026 have been fully considered. However, while performing an updated search, a new reference was found and cited in this Office action. Applicant's arguments regarding the added limitations are addressed in the § 101 and § 103 rejection sections above. Therefore, applicant's arguments are moot under the new grounds of rejection.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Naresh Vig, whose telephone number is (571) 272-6810. The examiner can normally be reached Mon-Fri, 06:30a-04:00p. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ilana Spar, can be reached at (571) 270-7537. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/NARESH VIG/
Primary Examiner, Art Unit 3622
March 30, 2026
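The text-modification step the rejection summarizes (duplicating keywords to emphasize them, removing keywords to de-emphasize them, then recomputing similarity over the modified text) can be sketched in a few lines. This is an illustrative sketch only: the function names and sample tokens are hypothetical, and a bag-of-words cosine stands in for whatever similarity measure the application actually claims.

```python
from collections import Counter
from math import sqrt

def modify_text(tokens, emphasize, deemphasize):
    """Duplicate emphasized keywords and drop de-emphasized ones."""
    out = []
    for t in tokens:
        if t in deemphasize:
            continue        # removing a keyword de-emphasizes it
        out.append(t)
        if t in emphasize:
            out.append(t)   # duplicating a keyword doubles its term weight
    return out

def cosine_similarity(a, b):
    """Bag-of-words cosine similarity between two token lists."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical campaign text, not taken from the application.
campaign = "spring tax refund promo email".split()
modified = modify_text(campaign, emphasize={"tax", "refund"}, deemphasize={"promo"})
# modified == ['spring', 'tax', 'tax', 'refund', 'refund', 'email']
```

Because duplication changes term counts, similarity scores recomputed over the modified text shift toward the emphasized keywords, which is the mechanism the claim then uses to reconstruct edge weights and pick the closest customer group.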

Prosecution Timeline

May 24, 2024: Application Filed
Jun 11, 2025: Non-Final Rejection — §101, §103
Sep 10, 2025: Response Filed
Oct 06, 2025: Final Rejection — §101, §103
Jan 08, 2026: Request for Continued Examination
Feb 13, 2026: Response after Non-Final Action
Mar 30, 2026: Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12346935
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER READABLE RECORDING MEDIUM FOR PROVIDING INFORMATION TO A PASSENGER
2y 5m to grant • Granted Jul 01, 2025
Patent 12314966
Providing Wireless Network Access
2y 5m to grant • Granted May 27, 2025
Patent 12282936
OMNI-CHANNEL DIGITAL COUPON CLIPPING AND REDEMPTION
2y 5m to grant • Granted Apr 22, 2025
Patent 12277580
METHODS AND SYSTEMS FOR PERSONALIZING VISITOR EXPERIENCE, ENCOURAGING PHILANTHROPIC ACTIVITY AND SOCIAL NETWORKING
2y 5m to grant • Granted Apr 15, 2025
Patent 12254494
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER READABLE RECORDING MEDIUM FOR PROVIDING INFORMATION TO A PASSENGER
2y 5m to grant • Granted Mar 18, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 37%
With Interview: 80% (+43.8%)
Median Time to Grant: 4y 2m
PTA Risk: High
Based on 607 resolved cases by this examiner. Grant probability derived from career allow rate.
