DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 19 December 2025 has been entered.
Status of Claims
This is a non-final office action in response to the request for continued examination filed 19 December 2025. Claim 20 has been amended. Claims 1-20 remain pending and have been examined.
Response to Amendment
Applicant’s amendment to claim 20 has been entered.
Applicant’s amendment is insufficient to overcome the pending 35 U.S.C. 101 rejection. The rejection is maintained and updated below, as necessitated by the amendment.
Response to Arguments
Applicant’s arguments regarding the 35 U.S.C. 101 rejection have been fully considered, but are not persuasive. Applicant asserts that the claims are patent eligible because they are directed to “a specific, computer-focused improvement in how large-scale, high dimensional impure datasets are transformed into machine-usable clean datasets using constraint-driven global distributions, automated scoring and normalization, defragmentation across disparate customer profiles, and cross-system deployment with quantifiable fidelity guarantees.” See Response at page 11. Applicant asserts that the claims are not directed to an abstract idea because “the claims are directed to a concrete technical pipeline with defined data structures and algorithmic steps, not a desired result.” See Response at page 12. Applicant additionally states that the claim limitations are akin to those of Enfish (improved data structure), McRO (specific rules applied by a computer to generate results not performed mentally), and Finjan (novel data structures improving computer operations). Applicant further asserts that “as in DDR Holdings and Ancora, the claims recite a specific computer-implemented architecture and workflow that improves the functioning of the data pipeline itself… thereby integrating any alleged abstract idea into a practical application.” See Response at page 13. And further, that under Step 2B, the independent claims recite significantly more because the additional elements supply an inventive concept analogous to BASCOM (non-conventional and non-generic arrangement of known components that improves computer functionality) and Amdocs (distributed, unconventional architecture yielding improved performance). See Response at page 14. Examiner respectfully disagrees.
The claimed steps are reasonably construed as falling within both the mental processes grouping and the fundamental economic practices subgrouping of certain methods of organizing human activity, because a data analyst and a promotional sales manager could analyze consumer data in relation to various business datasets to determine what digital content most aligns with a customer profile for the business purpose of improving customer relationship management and enterprise sales through personalized marketing. The claimed subject matter involves managing and analyzing customer profile and sales data per paras. [0055-0058] of the Specification (“online-transactional data and/or in-store transactional data … clean data computing device 102 may determine, for each of the set of customer profiles, a week-to-week, during the three-year time period, total purchase amount related to food items”). The focus of the claims as a whole is on managing the personal behavior and commercial interactions of people (e.g., consumers, advertisers, retailers, and product sponsors) for advertising, marketing, and sales activities and behaviors, which fall within the certain methods of organizing human activity grouping of abstract ideas. Therefore, the claims recite an abstract idea.
While the claims include limitations for cleansing a dataset to generate a data structure for personalized consumer digital content, the limitations of independent claims 1, 11, and 20 for determining a portion of the customer profile data that corresponds to the global distribution; comparing the portion of the customer profile data of the customer with the global distribution; generating one of the plurality of values for a score and associating the score with the corresponding one of the plurality of constraints; implementing operations that generate an overall score; associating the overall score with a customer profile of the customer; implementing operations that automatically generate a clean dataset; extracting insights from the clean dataset by identifying one or more features in the constraint data; and implementing a set of operations associated with the particular channel of the e-commerce entity, including generating a personalized digital content campaign, as well as the amended limitations of independent claim 20 for performing defragmenting operations that include normalizing the scores and selecting a subset of the plurality of customers that satisfies a constraint-consistency threshold, and implementing a verification process that determines an accuracy of the clean dataset, when considered individually or in combination, amount to data processing and applying business rules for filtering data and generating datasets for providing personalized digital content to a user, without significantly more.
While the data processing steps narrow how the clean dataset is generated and validated before extracting insights for digital content campaign generation, filtering, cleansing, and validating a dataset for processing is a form of data management that does not improve the functioning of a computer or another technology. The claim limitations are not directed to an improvement to the recited processors/computing systems and memory. These additional elements are broadly and generically claimed as tools used to implement the data processing and output steps. The Specification at para. [0091] states: “… the de-fragmentation operations remove or lessen the effects of fragmentation-related impurities (e.g., lessen the chance fragmented customer profiles of one or more customers are included in the clean dataset 321). In some examples, executed defragmentation engine 305 may implement the defragmentation operations that include normalizing the scores of each constraint associated with each of the set of customer profiles. In some instances, normalizing the scores may include normalizing the size of each discrete bucket of the global distributions of each constraint. For instance, based on the constraint data 316, executed defragmentation engine 305 may determine the actual distribution of the set of customers that have the associated score, and then normalize the score of a particular customer utilizing the determined actual distribution (e.g., dividing the score of the customer with the actual distribution).” As described in the Specification, the defragmentation and normalizing functions are performed on the “obtained” data as data processing steps, not as steps that improve the functioning of the computer, processor, or memory. Reducing the size of a dataset using constraints, rules, and other data associations is part of the recited abstract idea and does not confer subject matter eligibility.
Applicant’s assertion that the claim limitations are analogous to those of McRO, Enfish, DDR, Ancora, and BASCOM is not persuasive. Unlike the claimed invention in McRO, Inc. v. Bandai Namco Games America, Inc., 837 F.3d 1299 (Fed. Cir. 2016), which improved how the physical display operated to produce better quality images, the claimed invention here processes customer data to generate a personalized digital content campaign using mental processes (determining, comparing, associating, scoring, defragmenting, normalizing, selecting, verifying, and extracting) and mathematical concepts (scoring) with a generic computing system as a data processing tool. This generic computer implementation is not only directed to mental processes, but also does not improve a display mechanism as was the case in McRO. The claimed invention does not improve a computer or its components’ functionality or efficiency, or otherwise change the way those devices function, at least in the sense contemplated by the Federal Circuit in Enfish LLC v. Microsoft Corporation, 822 F.3d 1327 (Fed. Cir. 2016). The claimed self-referential table in Enfish was a specific type of data structure designed to improve the way a computer stores and retrieves data in memory. Enfish, 822 F.3d at 1339. To the extent Applicant contends that the claimed invention uses such a data structure to improve a computer’s functionality or efficiency, or otherwise change the way that device functions, the claims do not recite limitations directed to such improvements. Nor is Applicant’s claimed invention analogous to that which the court held eligible in DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1257 (Fed. Cir. 2014).
In DDR, instead of a computer network operating in its normal, expected manner by sending a website visitor to a third-party website apparently connected with a clicked advertisement, the claimed invention generated and directed the visitor to a hybrid page that presented (1) product information from the third party and (2) visual “look and feel” elements from the host website. DDR, 773 F.3d at 1258–59. Given this particular Internet-based solution, the court held that the claimed invention did not merely use the Internet to perform a business practice known from the pre-Internet world, but rather was necessarily rooted in computer technology to overcome a problem specifically arising in computer networks. Id. at 1257. The claimed invention here is not necessarily rooted in computer technology in the sense contemplated by DDR, where the claimed invention solved a challenge particular to the Internet. Although Applicant’s invention uses computing components, the claimed invention does not solve a challenge particular to the components that are used to implement its functionality. The court in Bascom Global Internet Services Inc. v. AT&T Mobility LLC, 827 F.3d 1341 (Fed. Cir. 2016) determined that an inventive concept may be found in a non-conventional and non-generic arrangement of components that are individually well-known and conventional. Bascom, 827 F.3d at 1350. The independent claims do not recite an improvement to a technology, but rather address a business issue of attempting to optimize the selection of digital content presented to a specific user. As a result, the 35 U.S.C. 101 rejection is proper, maintained, and updated below, as necessitated by the amendment made to claim 20.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claim 1 recites a device, independent claim 11 recites a process, and independent claim 20 recites a product, each to generate a clean dataset associated with a customer profile of a plurality of customer profiles. The claims are directed to the abstract idea of obtaining and analyzing customer profile data to generate and display a digital marketing campaign, without significantly more. Independent claims 1, 11, and 20 recite substantially similar limitations.
Taking independent claim 1 as representative, claim 1 recites at least the following limitations:
obtain constraint data, the constraint data including data that identifies and characterizes a plurality of constraints and data that, for each of the plurality of constraints, identifies and characterizes a global distribution associated with a corresponding constraint;
obtain customer profile data of a plurality of customers associated with the system, the customer profile data comprising a customer identifier;
for each customer of the plurality of customers: determine a portion of the customer profile data that corresponds to the global distribution associated with each of the plurality of constraints;
compare the portion of the customer profile data of the customer with the global distribution associated with each of the plurality of constraints;
for each comparison, generate one of a plurality of values for a score and associate the score with the corresponding one of the plurality of constraints;
based on the score of each of the one or more constraints, implement operations that generate an overall score, the overall score indicating a closeness between the customer profile data of the customer to at least the global distribution of each of the one or more constraints; and
based on the customer identifier associate the overall score with a customer profile of the customer, and store the customer profile and the overall score within a dataset database;
implement operations that automatically generates a clean dataset based on the overall score associated with a customer profile of each of the plurality of customers by aggregating, for each customer profile, normalized scores of each constraint, or by (ii) defragmenting operations that include normalizing the scores of each constraint associated with the customer profiles of each of the plurality of customers;
transmit the clean dataset to a plurality of computing systems, wherein each of the plurality of computing systems is associated with a particular channel of an e-commerce entity;
store the clean dataset at the dataset database;
extract, by each of the plurality of computing systems, insights from the clean dataset by identifying one of more features in the constraint data such that a first difference between cumulative high dimensional data in the clean dataset and global distributions associated with the one or more constraints is smaller than a second difference between high-dimensional data of the customer profile data and the global distributions associated with the one or more constraints; and
implement a set of operations associated with the particular channel of the e-commerce entity based on the extracted insights to personalize a user experience of a user on the particular channel, including generating a personalized digital content campaign for the user based on the extracted insights and displaying the personalized digital content campaign to the user based on the extracted insights.
Claim 20 recites the following limitations:
obtain constraint data, the constraint data including data that identifies and characterizes a plurality of constraints and data that, for each of the plurality of constraints, identifies and characterizes a global distribution associated with a corresponding constraint; obtain customer profile data of a plurality of customers associated with the system, the customer profile data comprising a customer identifier;
for each customer of the plurality of customers: determine a portion of the customer profile data that corresponds to the global distribution associated with each of the plurality of constraints;
compare the portion of the customer profile data of the customer with the global distribution associated with each of the plurality of constraints;
for each comparison, generate one of a plurality of values for a score and associate the score with the corresponding one of the plurality of constraints;
based on the score of each of the one or more constraints, implement operations that generate an overall score, the overall score indicating a closeness between the customer profile data of the customer to at least the global distribution of each of the one or more constraints; and
based on the customer identifier, associate the overall score with a customer profile of the customer, and store the customer profile and the overall score within a dataset database;
implement operations that automatically generates a clean dataset by performing defragmenting operations that include normalizing the scores of each constraint associated with the customer profiles of each of the plurality of customers, and selecting, based on a normalized overall score of each customer profile, a subset of the plurality of customers that satisfies a constraint-consistency threshold, the clean dataset including at least one or more portions of customer profile data of each customer in the selected subset;
implement a verification process that determines an accuracy of the clean dataset, the determined accuracy being used to determine whether to repeat the automatic generation of the clean dataset using a different subset of the plurality of customers;
transmit, in response to the determined accuracy of the clean dataset falling within a predetermined acceptable margin of error, the clean dataset to a plurality of computing systems, wherein each of the plurality of computing systems is associated with a particular channel of an e-commerce entity;
store the clean dataset at the dataset database;
extract, by each of the plurality of computing systems, insights from the clean dataset by identifying one of more features in the constraint data such that a first difference between cumulative high dimensional data in the clean dataset and global distributions associated with the one or more constraints is smaller than a second difference between high-dimensional data of the customer profile data and the global distributions associated with the one or more constraints; and
implement a set of operations associated with the particular channel of the e-commerce entity based on the extracted insights to personalize a user experience of a user on the particular channel, including generating a personalized digital content campaign for the user based on the extracted insights and displaying the personalized digital content to the user.
Under Step 1, independent claims 1, 11, and 20 fall within the statutory categories of invention and recite at least one step or act, including obtaining constraint data.
Under Step 2A Prong One, the limitations recited in claim 1 for obtaining constraint data; obtaining customer profile data; determining a portion of the customer profile data that corresponds to the global distribution; comparing the portion of the customer profile data of the customer with the global distribution; generating one of a plurality of values for a score and associating the score with the corresponding one of the plurality of constraints; implementing operations that generate an overall score; associating the overall score with a customer profile of the customer; implementing operations that automatically generate a clean dataset; transmitting the clean dataset; storing the clean dataset; extracting insights from the clean dataset by identifying one or more features in the constraint data; implementing a set of operations associated with a particular channel of the e-commerce entity; and generating a personalized digital content campaign, as well as the additional limitations recited in amended claim 20 for performing defragmenting operations that include normalizing scores of each constraint associated with the customer profiles; selecting a subset of the plurality of customers that satisfies a constraint-consistency threshold; and implementing a verification process that determines an accuracy of the clean dataset, as drafted, fall within the fundamental economic principles or practices grouping of abstract ideas because the claimed subject matter is directed to steps to analyze consumer data in relation to various business datasets to determine what digital content most aligns with a customer profile for the business purpose of improving customer relationship management and enterprise sales through personalized marketing, which the MPEP identifies as a certain method of organizing human activity. The claimed subject matter involves managing and analyzing customer profile and sales data per paras. [0055-0058] of the Specification (“online-transactional data and/or in-store transactional data … clean data computing device 102 may determine, for each of the set of customer profiles, a week-to-week, during the three-year time period, total purchase amount related to food items”). The focus of the claims as a whole is on managing the personal behavior and commercial interactions of people (e.g., consumers, advertisers, retailers, and product sponsors) for advertising, marketing, and sales activities and behaviors, which fall within the certain methods of organizing human activity grouping of abstract ideas. Therefore, the claims recite an abstract idea.
The claimed steps are directed to evaluation and judgment based on collected constraint and customer profile data, generating the score, overall score, and clean dataset as an output of the evaluation and analysis of the constraint and customer profile data. The step for extracting insights involves mental judgments by comparing data in the clean dataset with data of the customer profile and constraint data. Because the claims are directed to obtaining and analyzing ecommerce customer profile data to make business determinations using steps that could be performed by a data analyst or a marketing and sales specialist without the use of a computer, and further because a human could make the determination of the value of the business related data (cleansing, scoring, aggregating, normalizing, and comparing), the claims are properly categorized as falling within the mental processes grouping of abstract ideas. See MPEP 2106.04(a)(2)(III). The steps for generating, normalizing, and aggregating scores are mathematical operations that fall within the mathematical concepts grouping of abstract ideas. See MPEP § 2106.04(a)(2)(I).
The limitations for obtaining constraint data, obtaining customer profile data, transmitting the clean dataset, and extracting insights from the clean dataset amount to insignificant extra-solution data gathering steps that are used to provide input for the data processing steps. Gathering data and displaying an output by a computer are reasonably construed as insignificant extra-solution activity because the data gathering and transmitting steps merely provide input for the recited data processing steps, and outputting a result or performing a generic operation with an intent to display customized information does not impose meaningful limits on the claim. See MPEP 2106.05(g).
Under Step 2A Prong Two, the judicial exception of the independent claims is not integrated into a practical application. In particular, the claims recite a processor, memory, and database for performing the recited steps. These elements are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) and amount to no more than mere instructions to apply the exception using generic computer components. See MPEP 2106.05(f). For example, Applicant’s specification at paragraphs [0064-0065] states: “Clean data computing device 102 can include one or more processors 202, working memory 204, one or more input/output devices 206, instruction memory 208… Processors 202 can include one or more distinct processors, each having one or more cores. Each of the distinct processors can have the same or different structure. Processors 202 can include one or more central processing units (CPUs), …, and the like.” Adding generic computer components to perform generic functions, such as data gathering, performing calculations, and outputting a result, would not transform the claim into eligible subject matter. See MPEP 2106.05(h). The claims do not provide technical details regarding how the set of operations associated with the particular channel of the e-commerce entity is implemented, and only describe displaying personalized digital content as a form of implementation. Use of a machine that contributes only nominally or insignificantly to the execution of the claimed method (e.g., in a data gathering step or in a field-of-use limitation) would not integrate a judicial exception or provide significantly more. The claim merely amounts to instructions to apply the abstract idea on a computer, and is considered to amount to nothing more than requiring a generic computer system (e.g., a computer system comprising a generic database) to merely carry out the abstract idea itself.
Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea but merely use the computer as a tool to perform the data analysis steps.
While the data processing steps narrow how the clean dataset is generated and validated before extracting insights for digital content campaign generation, filtering, cleansing, and validating a dataset for processing is a form of data management that does not improve the functioning of a computer or another technology. The claim limitations are not directed to an improvement to the recited processors/computing systems and memory. These additional elements are broadly and generically claimed as tools used to implement the data processing and output steps. The Specification at para. [0091] states: “… the de-fragmentation operations remove or lessen the effects of fragmentation-related impurities (e.g., lessen the chance fragmented customer profiles of one or more customers are included in the clean dataset 321). In some examples, executed defragmentation engine 305 may implement the defragmentation operations that include normalizing the scores of each constraint associated with each of the set of customer profiles. In some instances, normalizing the scores may include normalizing the size of each discrete bucket of the global distributions of each constraint. For instance, based on the constraint data 316, executed defragmentation engine 305 may determine the actual distribution of the set of customers that have the associated score, and then normalize the score of a particular customer utilizing the determined actual distribution (e.g., dividing the score of the customer with the actual distribution).” As described in the Specification, the defragmentation and normalizing functions are performed on the “obtained” data as data processing steps, not as steps that improve the functioning of the computer, processor, or memory. Reducing the size of a dataset using constraints, rules, and other data associations is part of the recited abstract idea and does not confer subject matter eligibility.
Under Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of a processor, database, and storage device amount to no more than mere instructions to apply the exception using generic computer components, which cannot provide an inventive concept.
Dependent claims 2 through 10 and 12 through 19 include the abstract ideas of the independent claims. The limitations of the dependent claims merely narrow the mental process/fundamental economic practice by describing how the customer profile data is analyzed or used to generate a score for generating a clean dataset. The limitations of the dependent claims do not integrate the abstract idea into a practical application because none of the additional elements set forth any limitations that meaningfully limit the implementation of the abstract idea. Therefore, the claims are directed to an abstract idea. There are no additional elements that transform the claims into patent-eligible subject matter by amounting to significantly more. The analysis above applies to all statutory categories of invention. Accordingly, independent claims 11 and 20 and the claims that depend therefrom are rejected as ineligible for patenting under 35 U.S.C. 101 based upon the same analysis applied to claim 1 above. Therefore, claims 1-20 are ineligible under 35 U.S.C. 101.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Rauen et al. (US 2004/0015408) - efficient storage, management, and delivery of corporate content in response to orders for such content. The CCMD system includes a first module configured to create and/or acquire digital content for repurposing in a digital and/or a physical format. A second module, which is electronically coupled to the first module, manages data necessary to process and execute orders for such corporate content in one of digital format and physical format.
Mangipudi et al. (US 2016/0171540) - The CDTS identifies target users using user characteristic information, item information, and event information received from information sources. The CDTS determines a target item based on each item's item relevancy score and item relevancy priority generated based on an analysis of the user characteristic information, the item information, the event information, user intent dimensions, user interest dimensions, and global interest dimensions. The CDTS generates and targets relevant content associated with the target item and service information to the identified target users based on recommendations dynamically generated based on a content relevancy score, a content relevancy priority, targeting triggers, and the analysis of the user characteristic information, the item information, and the event information using content selection parameters.
Roberts et al. (US 2024/0362194) - extracting bulk data by generating with a secure agent a transfer request for transfer of the bulk data; generating with a content management unit a bulk data extraction job having a job ID associated therewith in response to the transfer request and then transferring the job ID to the secure agent; generating a programmatic call using the job ID with the secure agent requesting data files including a manifest file; generating with the secure agent a search request for searching the manifest file for selected information; retrieving the manifest file with the content management unit in response to the search request; searching and parsing the manifest file with the content management unit to identify and retrieve the data files corresponding to the job ID; and transferring the data files associated with the job ID with the content management unit to a data extraction unit.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LETORIA G KNIGHT whose telephone number is (571)270-0485. The examiner can normally be reached M-F 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao WU can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/L.G.K/Examiner, Art Unit 3623 /RUTAO WU/Supervisory Patent Examiner, Art Unit 3623