Prosecution Insights
Last updated: April 19, 2026
Application No. 17/525,744

FACILITATING GENERATION OF REPRESENTATIVE DATA

Status: Non-Final OA (§103)
Filed: Nov 12, 2021
Examiner: GRUSZKA, DANIEL PATRICK
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Adobe Inc.
OA Round: 3 (Non-Final)
Grant Probability: Favorable
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Career History: 32 total applications across all art units; 32 currently pending

Statute-Specific Performance

§101: 38.3% (-1.7% vs TC avg)
§103: 42.3% (+2.3% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 7.4% (-32.6% vs TC avg)

Tech Center averages are estimates • Based on career data from 0 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed 11/13/2025, which provides amendments to claims 1, 10, and 18, has been entered. Claims 1-20 remain pending in the application. Applicant's amendments to the claims overcome the §101 rejection.

Response to Arguments

Applicant's arguments with respect to 35 U.S.C. § 103 filed 11/13/2025 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 and 6-20 are rejected under 35 U.S.C. 103 as being unpatentable over Forgeat (US 2022/0076066 A1) in view of Huang (NPL: ‘Context-Aware Generative Adversarial Privacy’, published 2017).
Regarding claim 1, Forgeat teaches: obtaining an original dataset for which a data representation is to be generated; ([0032] “Generative models are models trained on a training data set obtained from samples of an original data set.”) training a data generation model to generate a representative dataset that represents the original dataset, wherein the data generation model is trained based on the original dataset, ([0044] “the first machine learning process can include a first neural network that is trained to produce synthetic data and a second neural network that is trained to discriminate between synthetic data and real data” and [0035] “The accuracy budget (i.e., a first threshold for accuracy) of the synthetic data (compared to the real data) as well as the privacy budget (i.e., a second threshold for PII data recovery) are also parameters that can be set by the operator (i.e., the data owner).”) generating, via the trained data generation model, the representative dataset that represents the original dataset, wherein the generated representative dataset maintains a set of desired statistical properties of the original dataset, maintains an extent of data privacy of the set of original data, and maintains an extent of data value of the set of original data. ([0030] “The embodiments offer such a technological solution. The embodiments provide a process to generate synthetic data from the actual collected network operator data where the synthetic data is anonymized to provide privacy, but the synthesized network operator data maintains other essential properties that make it useful for research and analysis related to machine learning.”). 
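The Forgeat arrangement mapped above — synthetic data held to an accuracy budget (statistical closeness to the real data) and a privacy budget (a floor on how identifiable the synthetic rows may be) — can be illustrated with a toy check. Everything here (the function name, the mean-gap and nearest-neighbour metrics, the threshold values) is an illustrative assumption, not taken from Forgeat or from the claims:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_quality(real, synth, accuracy_budget, privacy_budget):
    """Toy check of Forgeat-style budgets: the synthetic data must stay
    statistically close to the real data (accuracy budget), while no
    synthetic row may sit too close to any real row (privacy budget).
    Both metrics are illustrative stand-ins for whatever the operator
    actually configures."""
    # Accuracy: the largest per-feature gap between means must be small.
    stat_gap = np.abs(real.mean(axis=0) - synth.mean(axis=0)).max()
    # Privacy: the distance from every synthetic row to its nearest
    # real row must exceed the privacy budget.
    d = np.linalg.norm(real[:, None, :] - synth[None, :, :], axis=2)
    nearest = d.min(axis=0)
    return bool(stat_gap <= accuracy_budget and nearest.min() >= privacy_budget)

real = rng.normal(0.0, 1.0, size=(200, 3))
synth = rng.normal(0.05, 1.0, size=(200, 3))
print(synthetic_quality(real, synth, accuracy_budget=0.3, privacy_budget=0.01))
```

Copying the real data verbatim trivially satisfies the accuracy budget but fails the privacy budget (nearest-neighbour distance is zero), which is the tension the discriminator/generator training in Forgeat's paragraph [0044] is meant to balance.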
Forgeat does not teach: a set of privacy settings indicating privacy of data associated with the original dataset, and a set of value settings indicating value of data associated with the original dataset; the data generation model comprises a generator of a generative adversarial network (GAN) model that further includes a discriminator that adjusts the generator using an objective function that incorporates a data privacy regulator, wherein the set of value settings comprises information identifying one or more features of the original dataset that are designated as important and the set of privacy settings comprises information identifying one or more features of the original dataset designated as being privacy sensitive, wherein the objective function rewards the data generator model based on satisfying the set of value setting and penalizes the data generator model using an exponential loss function based on violations of the set of privacy settings.

However, Huang does teach: a set of privacy settings indicating privacy of data associated with the original dataset, and a set of value settings indicating value of data associated with the original dataset (Section 1.1. Our contributions “Each row in D contains both private variables (represented by Y) and public variables (represented by X). The goal of the data holder is to generate ˆX in a way such that: (a) ˆX is as good of a representation of X as possible; and (b) an adversary cannot use ˆX to reliably infer Y.” The private variables in the dataset are the privacy settings, and the goal of generating a good representation can be seen as the value settings.) the data generation model comprises a generator of a generative adversarial network (GAN) model that further includes a discriminator that adjusts the generator using an objective function that incorporates a data privacy regulator, (Section 1.1.
Our Contributions “At the core of GAP is a loss function (We quantify the adversary’s performance via a loss function and the quality of the released data via a distortion function) that captures how well an adversary does in terms of inferring the private variables.”) wherein the set of value settings comprises information identifying one or more features of the original dataset that are designated as important and the set of privacy settings comprises information identifying one or more features of the original dataset designated as being privacy sensitive, (Section 1.1. Our contributions “Each row in D contains both private variables (represented by Y) and public variables (represented by X). The goal of the data holder is to generate ˆX in a way such that: (a) ˆX is as good of a representation of X as possible; and (b) an adversary cannot use ˆX to reliably infer Y.” and “The privatizer and adversary achieve their goals by competing in a constrained minimax, zero-sum game. On the one hand, the privatizer (a conditional generative model) is designed to minimize the adversary’s performance in inferring Y reliably. On the other hand, the adversary (a classifier) seeks to find the best inference strategy that maximizes its performance.”) wherein the objective function rewards the data generator model based on satisfying the set of value setting and penalizes the data generator model using an exponential loss function based on violations of the set of privacy settings. (Section 2.1. Formulation “On the one hand, the data holder would like to find a privacy mechanism g that is both privacy preserving (in the sense that it is difficult for the adversary to learn Y from ˆX) and utility preserving (in the sense that it does not distort the original data too much). 
On the other hand, for a fixed choice of privacy mechanism g, the adversary would like to find a (potentially randomized) function h that minimizes its expected loss, which is equivalent to maximizing the negative of the expected loss.” Making sure the original data is not distorted is a type of reward. Also Section 2.2. GAP Under various Loss Functions “Thus, under the log-loss in (6), GAP is equivalent to using MI as the privacy metric”; a log loss function is a type of exponential function.)

Forgeat and Huang are considered analogous art to the claimed invention because they are in the same field of endeavor, being synthetic data generation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the data generation system of Forgeat with the objective function of Huang. One would want to do this to be able to publish data that guarantees privacy and utility (Huang, Introduction).

Regarding claim 2, Forgeat in view of Huang teaches claim 1 as outlined above. Forgeat further teaches: the original dataset is in the form of a matrix including rows having data associated with individuals and columns representing different features related to the individuals (paragraph 43, describing the data being in table format).

Regarding claim 3, Forgeat in view of Huang teaches claim 1 as outlined above. Huang further teaches: the set of privacy settings including a level of privacy desired for each quasi-identifier feature in the original dataset (Section 2. Generative Adversarial Privacy Model “We consider a dataset D which contains both public and private variables for n individuals (see Figure 1). We represent the public variables by a random variable X ∈ X, and the private variables (which are typically correlated with the public variables) by a random variable Y ∈ Y.”)

Regarding claim 4, Forgeat in view of Huang teaches claim 1 as outlined above. Forgeat further teaches: numerical and categorical attributes in the original dataset are normalized for use in training the data generation model ([0043] “The process can organize the collected data into a standardized format (Block 103)”).

Regarding claim 6, Forgeat in view of Huang teaches claim 1 as outlined above. Huang further teaches: the generator that attempts to produce data points similar to original data points of the original dataset and a discriminator that attempts to minimize a distance between the original dataset and synthetic data (Section 1.1. Our Contributions “It includes two learning blocks: a privatizer, whose task is to output a sanitized version of the public variables (subject to some distortion constraints); and an adversary, whose task is to learn the private variables from the sanitized data. The privatizer and adversary achieve their goals by competing in a constrained minimax, zero-sum game.”)

Regarding claim 7, Forgeat in view of Huang teaches claim 1 as outlined above. Huang further teaches: the objective function incorporates the set of privacy settings to penalize a generator of the data generation model if generated representative data is too close to original data of the original dataset and incorporates the set of value settings to reward the generator when it performs well on value-add features. (Section 2.1. Formulation “On the one hand, the data holder would like to find a privacy mechanism g that is both privacy preserving (in the sense that it is difficult for the adversary to learn Y from ˆX) and utility preserving (in the sense that it does not distort the original data too much). On the other hand, for a fixed choice of privacy mechanism g, the adversary would like to find a (potentially randomized) function h that minimizes its expected loss, which is equivalent to maximizing the negative of the expected loss. To achieve these two opposing goals, we model the problem as a constrained minimax game between the privatizer and the adversary”)

Regarding claim 8, Forgeat in view of Huang teaches claim 1 as outlined above. Forgeat further teaches: providing the generated representative dataset for use in performing a machine learning task. ([0030] “The embodiments offer such a technological solution. The embodiments provide a process to generate synthetic data from the actual collected network operator data where the synthetic data is anonymized to provide privacy, but the synthesized network operator data maintains other essential properties that make it useful for research and analysis related to machine learning.”)

Regarding claim 9, Forgeat in view of Huang teaches claim 1 as outlined above. Forgeat further teaches: the set of privacy settings is obtained via a data provider and the set of value settings is obtained via a data recipient ([0036] “the embodiments have the following advantages, they allow sharing of sensitive data (or a synthetic version thereof) for training machine learning applications by parties that do not have access to the real data (e.g.
for privacy reasons), allow generation of realistic synthetic data, allow the network operator (or similar data owner) to determine the accuracy budget (how realistic the data is), allow the network operator (or similar data owner) to flexibly set the privacy budget on the sensitive data, guarantees the privacy budget, and give the network operators an opportunity to share their data in a data market.”) Regarding claim 10, Forgeat teaches: One or more non-transitory computer-readable media having a plurality of executable instructions embodied thereon, which, when executed by one or more processors, cause the one or more processors to perform a method for facilitating representative data generation, ([0007] “The computer system includes a non-transitory machine-readable medium” and [0032] “Generative models are models trained on a training data set obtained from samples of an original data set.”) obtaining a set of original data for which a data representation is to be generated; ([0032] “Generative models are models trained on a training data set obtained from samples of an original data set.”) generating, via a trained data generation model, a set of representative data representing the set of original data, wherein the set of representative data maintains an extent of data privacy and an extent of value based on the trained data generation model being trained using a privacy constraint and a value constraint ([0030] “The embodiments offer such a technological solution. The embodiments provide a process to generate synthetic data from the actual collected network operator data where the synthetic data is anonymized to provide privacy, but the synthesized network operator data maintains other essential properties that make it useful for research and analysis related to machine learning.”). providing the set of representative data for use in performing a subsequent machine learning task. ([0030] “The embodiments offer such a technological solution. 
The embodiments provide a process to generate synthetic data from the actual collected network operator data where the synthetic data is anonymized to provide privacy, but the synthesized network operator data maintains other essential properties that make it useful for research and analysis related to machine learning.”).

Forgeat does not teach: the data generation model comprises a generator of a generative adversarial network (GAN) model that further includes a discriminator that adjusts the generator using an objective function that incorporates a data privacy regulator, wherein the extent of value settings comprises information identifying one or more features of the original dataset that are designated as important and the extent of privacy settings comprises information identifying one or more features of the original dataset designated as being privacy sensitive, wherein the objective function rewards the data generator model based on satisfying the extent of value setting and penalizes the data generator model using an exponential loss function based on violations of the extent of data privacy.

However, Huang does teach: the data generation model comprises a generator of a generative adversarial network (GAN) model that further includes a discriminator that adjusts the generator using an objective function that incorporates a data privacy regulator, (Section 1.1. Our Contributions “At the core of GAP is a loss function (We quantify the adversary’s performance via a loss function and the quality of the released data via a distortion function) that captures how well an adversary does in terms of inferring the private variables.”) wherein the extent of value settings comprises information identifying one or more features of the original dataset that are designated as important and the extent of privacy settings comprises information identifying one or more features of the original dataset designated as being privacy sensitive, (Section 1.1. Our contributions “Each row in D contains both private variables (represented by Y) and public variables (represented by X). The goal of the data holder is to generate ˆX in a way such that: (a) ˆX is as good of a representation of X as possible; and (b) an adversary cannot use ˆX to reliably infer Y.” and “The privatizer and adversary achieve their goals by competing in a constrained minimax, zero-sum game. On the one hand, the privatizer (a conditional generative model) is designed to minimize the adversary’s performance in inferring Y reliably. On the other hand, the adversary (a classifier) seeks to find the best inference strategy that maximizes its performance.”) wherein the objective function rewards the data generator model based on satisfying the extent of value setting and penalizes the data generator model using an exponential loss function based on violations of the extent of data privacy. (Section 2.1. Formulation “On the one hand, the data holder would like to find a privacy mechanism g that is both privacy preserving (in the sense that it is difficult for the adversary to learn Y from ˆX) and utility preserving (in the sense that it does not distort the original data too much). On the other hand, for a fixed choice of privacy mechanism g, the adversary would like to find a (potentially randomized) function h that minimizes its expected loss, which is equivalent to maximizing the negative of the expected loss.” Making sure the original data is not distorted is a type of reward. Also Section 2.2. GAP Under various Loss Functions “Thus, under the log-loss in (6), GAP is equivalent to using MI as the privacy metric”; a log loss function is a type of exponential function.)

Forgeat and Huang are considered analogous art to the claimed invention because they are in the same field of endeavor, being synthetic data generation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the data generation system of Forgeat with the objective function of Huang. One would want to do this to be able to publish data that guarantees privacy and utility (Huang, Introduction).

Regarding claim 11, Forgeat in view of Huang teaches claim 10 as outlined above. Forgeat further teaches: the extent of data privacy maintained prevents a subsequent re-identification of an individual associated with the set of original data. ([0030] “The embodiments offer such a technological solution. The embodiments provide a process to generate synthetic data from the actual collected network operator data where the synthetic data is anonymized to provide privacy, but the synthesized network operator data maintains other essential properties that make it useful for research and analysis related to machine learning.”)

Regarding claim 12, Forgeat in view of Huang teaches claim 10 as outlined above. Forgeat further teaches: the extent of value maintained enables a subsequent use of the set of representative data to perform the subsequent machine learning task with a similar outcome as to what would be achieved using the set of original data. ([0030] “The embodiments offer such a technological solution. The embodiments provide a process to generate synthetic data from the actual collected network operator data where the synthetic data is anonymized to provide privacy, but the synthesized network operator data maintains other essential properties that make it useful for research and analysis related to machine learning.”)

Regarding claim 13, Forgeat in view of Huang teaches claim 10 as outlined above. Huang further teaches: the privacy constraint and the value constraint are incorporated into the objective function used to train the trained data generation model. (Section 2.1. Formulation “On the one hand, the data holder would like to find a privacy mechanism g that is both privacy preserving (in the sense that it is difficult for the adversary to learn Y from ˆX) and utility preserving (in the sense that it does not distort the original data too much). On the other hand, for a fixed choice of privacy mechanism g, the adversary would like to find a (potentially randomized) function h that minimizes its expected loss, which is equivalent to maximizing the negative of the expected loss.” Making sure the original data is not distorted is a type of reward. Also Section 2.2. GAP Under various Loss Functions “Thus, under the log-loss in (6), GAP is equivalent to using MI as the privacy metric”; a log loss function is a type of exponential function.)

Regarding claim 14, Forgeat in view of Huang teaches claim 10 as outlined above. Huang further teaches: the privacy constraint is used to penalize the generator when the generator produces data too close to the set of original data. (Section 2.2. GAP Under various Loss Functions “Thus, under the log-loss in (6), GAP is equivalent to using MI as the privacy metric”; a log loss function is a type of exponential function.)

Regarding claim 15, Forgeat in view of Huang teaches claim 10 as outlined above. Huang further teaches: the privacy constraint includes a hyper-parameter used to modify effect of the privacy constraint, and wherein the privacy constraint is based on at least one privacy setting indicated by a provider of the set of original data. (Section 2. Generative Adversarial Privacy Model “We consider a dataset D which contains both public and private variables for n individuals (see Figure 1). We represent the public variables by a random variable X ∈ X, and the private variables (which are typically correlated with the public variables) by a random variable Y ∈ Y.”)

Regarding claim 16, Forgeat in view of Huang teaches claim 10 as outlined above. Huang further teaches: the value constraint is used to reward the generator for producing data close to the set of original data in relation to salient features. (Section 2.1. Formulation “On the one hand, the data holder would like to find a privacy mechanism g that is both privacy preserving (in the sense that it is difficult for the adversary to learn Y from ˆX) and utility preserving (in the sense that it does not distort the original data too much). On the other hand, for a fixed choice of privacy mechanism g, the adversary would like to find a (potentially randomized) function h that minimizes its expected loss, which is equivalent to maximizing the negative of the expected loss.” Making sure the original data is not distorted is a type of reward.)

Regarding claim 17, Forgeat in view of Huang teaches claim 10 as outlined above. Huang further teaches: the value constraint includes a hyper-parameter used to modify effect of the value constraint. (Section 1.1. Our contributions “The goal of the data holder is to generate ˆX in a way such that: (a) ˆX is as good of a representation of X as possible; and (b) an adversary cannot use ˆX to reliably infer Y.” The private variables in the dataset are the privacy settings, and the goal of generating a good representation can be seen as the value settings.)

Regarding claim 18, Forgeat teaches: a computer system comprising: one or more processors; and ([0007] “The computer system includes a non-transitory machine-readable medium having stored therein a data collector and a data synthesizer, and a processor coupled to the non-transitory machine-readable medium, the processor to execute the data collector and the data synthesizer”) one or more non-transitory computer-readable storage media, coupled with the one or more processors, having instructions stored thereon, which, when executed by the one or more processors, cause the computing system to: obtain an original dataset for which a data representation is to be generated; ([0007]
“The computer system includes a non-transitory machine-readable medium” and [0032] “Generative models are models trained on a training data set obtained from samples of an original data set.”) train a generative adversarial network (GAN) model to generate a representative dataset that represents the original dataset and maintains a level of privacy and value in the representative dataset, wherein the GAN model, including a generator and a discriminator, is trained by: ([0044] “the first machine learning process can include a first neural network that is trained to produce synthetic data and a second neural network that is trained to discriminate between synthetic data and real data” and [0035] “The accuracy budget (i.e., a first threshold for accuracy) of the synthetic data (compared to the real data) as well as the privacy budget (i.e., a second threshold for PII data recovery) are also parameters that can be set by the operator (i.e., the data owner).”) the generator generating synthetic data in a same form as the original dataset, and ([0030] “The embodiments offer such a technological solution. The embodiments provide a process to generate synthetic data from the actual collected network operator data where the synthetic data is anonymized to provide privacy, but the synthesized network operator data maintains other essential properties that make it useful for research and analysis related to machine learning.”). 
Forgeat does not teach: the discriminator using an objective function to train the generator based on the generated synthetic data, wherein the objective function incorporates a privacy constraint to maintain privacy of the generated synthetic data and a value constraint to maintain value of the generated synthetic data, wherein the value constraint comprises information identifying one or more features of the original dataset that are designated as important and the privacy constraint comprises information identifying one or more features of the original dataset designated as being privacy sensitive, and wherein the objective function rewards the generator based on satisfying the value constraint and penalizes the generator using an exponential loss function based on violations of privacy constraint.

However, Huang does teach: the discriminator using an objective function to train the generator based on the generated synthetic data, wherein the objective function incorporates a privacy constraint to maintain privacy of the generated synthetic data and a value constraint to maintain value of the generated synthetic data, (Section 2.1. Formulation “On the one hand, the data holder would like to find a privacy mechanism g that is both privacy preserving (in the sense that it is difficult for the adversary to learn Y from ˆX) and utility preserving (in the sense that it does not distort the original data too much). On the other hand, for a fixed choice of privacy mechanism g, the adversary would like to find a (potentially randomized) function h that minimizes its expected loss, which is equivalent to maximizing the negative of the expected loss.” Making sure the original data is not distorted is a type of reward. Also Section 2.2. GAP Under various Loss Functions “Thus, under the log-loss in (6), GAP is equivalent to using MI as the privacy metric”; a log loss function is a type of exponential function.) wherein the value constraint comprises information identifying one or more features of the original dataset that are designated as important and the privacy constraint comprises information identifying one or more features of the original dataset designated as being privacy sensitive, and (Section 1.1. Our contributions “Each row in D contains both private variables (represented by Y) and public variables (represented by X). The goal of the data holder is to generate ˆX in a way such that: (a) ˆX is as good of a representation of X as possible; and (b) an adversary cannot use ˆX to reliably infer Y.” and “The privatizer and adversary achieve their goals by competing in a constrained minimax, zero-sum game. On the one hand, the privatizer (a conditional generative model) is designed to minimize the adversary’s performance in inferring Y reliably. On the other hand, the adversary (a classifier) seeks to find the best inference strategy that maximizes its performance.”) wherein the objective function rewards the generator based on satisfying the value constraint and penalizes the generator using an exponential loss function based on violations of privacy constraint. (Section 2.1. Formulation “On the one hand, the data holder would like to find a privacy mechanism g that is both privacy preserving (in the sense that it is difficult for the adversary to learn Y from ˆX) and utility preserving (in the sense that it does not distort the original data too much). On the other hand, for a fixed choice of privacy mechanism g, the adversary would like to find a (potentially randomized) function h that minimizes its expected loss, which is equivalent to maximizing the negative of the expected loss.” Making sure the original data is not distorted is a type of reward. Also Section 2.2. GAP Under various Loss Functions “Thus, under the log-loss in (6), GAP is equivalent to using MI as the privacy metric”; a log loss function is a type of exponential function.)
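The examiner's repeated aside that "a log loss function is a type of exponential function" compresses the formal result cited from Huang §2.2. A sketch of the underlying argument (paraphrased, not quoted from Huang): under log-loss the adversary's best strategy is the posterior distribution, and the privatizer's game reduces to a mutual-information privacy metric:

```latex
% Log-loss for an adversary h guessing the private variable Y from released data \hat{X}:
%   \ell(h(\hat{X}), y) = -\log h_y(\hat{X}).
% The optimal adversary is the posterior, h_y^*(\hat{x}) = P(Y = y \mid \hat{X} = \hat{x}),
% whose expected loss is the conditional entropy:
\min_h \mathbb{E}\big[-\log h_Y(\hat{X})\big] = H(Y \mid \hat{X}).
% The privatizer g maximizes the adversary's loss subject to a distortion budget D:
\max_{g} \; H\big(Y \mid g(X)\big) \quad \text{s.t.} \quad \mathbb{E}\, d\big(g(X), X\big) \le D,
% and since H(Y) is fixed by the data, this is equivalent to minimizing mutual information:
\min_{g} \; I\big(g(X); Y\big), \qquad I(\hat{X}; Y) = H(Y) - H(Y \mid \hat{X}).
```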
Forgeat and Huang are considered analogous art to the claimed invention because they are in the same field of endeavor, being synthetic data generation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the data generation system of Forgeat with the objective function of Huang. One would want to do this to be able to publish data that guarantees privacy and utility (Huang, Introduction).

Regarding claim 19, Forgeat in view of Huang teaches claim 18 as outlined above. Huang further teaches: the privacy constraint includes a hyper-parameter used to modify effect of the privacy constraint, and wherein the privacy constraint is based on at least one privacy setting indicated by a provider of the set of original data. (Section 2. Generative Adversarial Privacy Model “We consider a dataset D which contains both public and private variables for n individuals (see Figure 1). We represent the public variables by a random variable X ∈ X, and the private variables (which are typically correlated with the public variables) by a random variable Y ∈ Y.”)

Regarding claim 20, Forgeat in view of Huang teaches claim 18 as outlined above. Huang further teaches: the value constraint includes a hyper-parameter used to modify effect of the value constraint, and wherein the value constraint is based on at least one value setting indicated by an intended recipient of the representative dataset. (Section 1.1. Our contributions “The goal of the data holder is to generate ˆX in a way such that: (a) ˆX is as good of a representation of X as possible; and (b) an adversary cannot use ˆX to reliably infer Y.” The private variables in the dataset are the privacy settings, and the goal of generating a good representation can be seen as the value settings.)

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Forgeat in view of Huang and Katzmann (US 2019/0370969 A1).

Regarding claim 5, Forgeat in view of Huang teaches claim 1 as outlined above. Neither Forgeat nor Huang teaches the value settings are represented via saliency map indicating measures of impact various attributes associated with the original dataset have on performance of a subsequent machine learning task. However, Katzmann teaches representing data via a saliency map to measure the impact various aspects of the original dataset have (paragraphs 152 and 153). Forgeat, Huang, and Katzmann are considered analogous art to the claimed invention because they are in the same field of endeavor of generating synthetic data using a generative adversarial network for the purpose of sharing data while maintaining a level of data privacy. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have represented the accuracy budget (equivalent to value settings) of Forgeat via the saliency map of Katzmann in order to enable a target-specific visualization of input regions predictive for specific outcome estimates (Katzmann, [0152]-[0153]). One of ordinary skill in the art would have been motivated to use a saliency map for data representation to visualize the important aspects of the dataset.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL PATRICK GRUSZKA whose telephone number is (571) 272-5259. The examiner can normally be reached M-F 9:00 AM - 6:00 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL GRUSZKA/
Examiner, Art Unit 2121

/Li B. Zhen/
Supervisory Patent Examiner, Art Unit 2121
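Note on the cited Huang reference: the privacy/utility trade-off the examiner relies on for claims 19-20 (generate X̂ that represents X well while preventing an adversary from inferring the private variable Y, with a hyper-parameter scaling the privacy term) can be sketched as a single objective. This sketch is not part of the office action or the cited papers' actual code; `gap_objective`, its arguments, and the MSE/log-likelihood choices are illustrative assumptions.

```python
import numpy as np

def gap_objective(x, x_hat, y, adv_pred, lam=1.0):
    """Illustrative GAP-style (generative adversarial privacy) objective.

    x        : original public data X
    x_hat    : released representation X-hat
    y        : binary private variable Y (0/1)
    adv_pred : adversary's predicted probability that Y = 1 given X-hat
    lam      : hyper-parameter trading privacy against utility
               (the "hyper-parameter used to modify effect of the
               privacy constraint" discussed for claims 19-20)
    """
    # Utility (value) term: distortion between X and the released X-hat.
    distortion = np.mean((x - x_hat) ** 2)
    # Privacy term: adversary's average log-likelihood of Y given X-hat;
    # the privatizer wants this low (adversary cannot reliably infer Y).
    adv_loglik = np.mean(y * np.log(adv_pred) + (1 - y) * np.log(1 - adv_pred))
    # Privatizer minimizes both terms; lam scales the privacy constraint.
    return distortion + lam * adv_loglik
```

With `lam = 0` the objective reduces to pure distortion (utility only); increasing `lam` weights the privacy constraint more heavily, which is the role the claims assign to the hyper-parameter.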
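Note on claim 5: the claimed saliency map measures how much each attribute of the original dataset impacts a downstream model. A minimal finite-difference sketch of that idea is below; `saliency_map` and `model_fn` are hypothetical names, not drawn from Katzmann or the claims.

```python
import numpy as np

def saliency_map(model_fn, x, eps=1e-4):
    """Finite-difference saliency over a 1-D attribute vector.

    Perturbs each attribute of x by eps and records how strongly the
    model's scalar output changes, yielding one impact measure per
    attribute (the role the claim assigns to the saliency map).
    """
    base = model_fn(x)
    sal = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        # Absolute rate of change of the output w.r.t. attribute i.
        sal[i] = abs(model_fn(xp) - base) / eps
    return sal
```

Attributes with larger saliency values matter more to the subsequent machine learning task; in Katzmann's terms, this supports a target-specific visualization of which input regions drive the outcome.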

Prosecution Timeline

Nov 12, 2021
Application Filed
Mar 11, 2025
Non-Final Rejection — §103
May 20, 2025
Interview Requested
May 28, 2025
Examiner Interview (Telephonic)
May 28, 2025
Examiner Interview Summary
Jun 09, 2025
Response Filed
Sep 02, 2025
Final Rejection — §103
Oct 21, 2025
Interview Requested
Oct 29, 2025
Applicant Interview (Telephonic)
Oct 29, 2025
Examiner Interview Summary
Nov 13, 2025
Response after Non-Final Action
Dec 15, 2025
Request for Continued Examination
Jan 01, 2026
Response after Non-Final Action
Feb 06, 2026
Non-Final Rejection — §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
Grant Probability
3y 3m
Median Time to Grant
High
PTA Risk
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
