Prosecution Insights
Last updated: April 19, 2026
Application No. 18/819,103

DIFFUSION-BASED VIDEO COMMUNICATION AND STREAMING

Non-Final OA (§103, §112)

Filed: Aug 29, 2024
Examiner: GOCO, JOHN PATRICK
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Ikin Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; resolved cases with interview vs. without)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 8 total applications across all art units; 8 currently pending

Statute-Specific Performance

§103: 68.8% (+28.8% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

2. Claim 5 is objected to because of the following informalities: the phrase "...corresponds to one of one of compressed representations...". For the purposes of this examination, the phrase will be read as "...corresponds to one of compressed representations...". Appropriate correction is required.

3. Claim 13 is objected to because of the following informalities: the phrase "...corresponds to one of one of compressed representations...". For the purposes of this examination, the phrase will be read as "...corresponds to one of compressed representations...". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

4. Claims 3 and 11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

5. Claim 3 recites the limitation "the second data of data" in line 1 of the claim. There is insufficient antecedent basis for this limitation in the claim. It is unclear if it is intended to be read "second set of data" similar to in claim 2. For the purposes of examination, the claim will be interpreted as "... second set of data ...".

6. Claim 11 recites the limitation "the second data of data" in line 1 of the claim. There is insufficient antecedent basis for this limitation in the claim. It is unclear if it is intended to be read "second set of data" similar to in claim 10. For the purposes of examination, the claim will be interpreted as "... second set of data ...".

Claim Rejections - 35 USC § 103

8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

10. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

11. Claims 1-5 and 9-13 are rejected under 35 U.S.C. 103 as being unpatentable over US 20230074979 A1 (Brehmer et al., hereinafter Brehmer) in view of US 20240185588 A1 (Kumari et al., hereinafter Kumari).

Regarding claim 1, Brehmer teaches A computer-implemented method for generating image sequences, the method comprising: (Par 216 "Processes and methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media.")

receiving, at a computing device, values of a set of fine-tuning weights for a pre-trained diffusion model (Par 45 "the subspace or manifold of model parameters includes a portion of the weight vectors", Par 47 "In some cases, the bitstream can also include a compressed version of the one or more subspace coordinates that correspond to the set of updated model parameters. In some cases, the receiving device can receive the bitstream") wherein the fine-tuning weights are generated by training a first artificial neural network using training frames of training image data (Par 52 "The weight values may initially be determined by an iterative flow of training data", Par 53 "A GAN can include two neural networks that operate together. One of the neural networks (referred to as a generative neural network or generator denoted as G(z)) generates a synthesized output, and the other neural network (referred to as an discriminative neural network or discriminator denoted as D(X)) evaluates the output for authenticity (whether the output is from an original dataset, such as the training dataset, or is generated by the generator). The training input and output can include images as an illustrative example. The generator is trained to try and fool the discriminator into determining a synthesized image generated by the generator is a real image from the dataset.") in combination with a first set of data derived from the frames of training image data (Par 190 "the process 1300 can include generating a set of global model parameters based on a training dataset ... In some cases, the training dataset may correspond to training data"), the first artificial neural network having ... at least one trainable layer including the fine-tuning weights where the values of the fine-tuning weights are adjusted during the training. (Par 55 "These multi-layered architectures may be trained one layer at a time and may be fine-tuned using back propagation.", Par 52 "The weight values may initially be determined by an iterative flow of training data", Par 46 "the machine learning system can be fine-tuned (e.g., trained, fitted, etc.) for an instance of input data (e.g., an image, a video, a portion of a video, three-dimensional (3D) data, etc.) that is to be compressed and transmitted", where the transmitting device implements the first artificial neural network.)

establishing, at the computing device, a specialized diffusion model by inserting the values of the fine-tuning weights into an adaptable layer of a second artificial neural network having fixed-weight layers implementing the pre-trained diffusion model; (Par 46 "For example, fine-tuning of the neural network can include selecting a set of updated model parameters that correspond to a weight vector", Par 47 "The decoder of the receiving device can use the one or more subspace coordinates to determine the updated model parameters for the neural network. A machine learning system (e.g., a neural network, such as an RD-AE or other neural network) of the decoder can use the updated model parameters to decode the compressed input data.", Par 55 "These multi-layered architectures may be trained one layer at a time and may be fine-tuned using back propagation.", Par 52 "The weight values may initially be determined by an iterative flow of training data", where the receiving device implements the second artificial neural network.)

receiving, at the computing device, a second set of data derived from frames of image data containing at least some scene information present in the training frames of training image data, the second set of data being provided to the second artificial neural network; (Par 61 "can transmit the compressed one or more images to a receiving device, and the receiving device can decompress the one or more compressed images efficiently using the machine learning based techniques described herein. As used herein, an image can refer to a still image and/or a video frame associated with a sequence of frames (e.g., a video)", where the second set of data is the compressed image to be transmitted.)

generating, by the second artificial neural network, images corresponding to the frames of image data; (Par 95 "The receiving device 420 can decompress the compressed image content, and can output the decompressed image content on the receiving device 420 (e.g., for display, editing, etc.)", Par 96 "The image compression pipeline in the transmitting device 410 and the bitstream decompression pipeline in the receiving device 420 generally use one or more artificial neural networks to compress image content and/or decompress a received bitstream into image content, according to aspects of the present disclosure.")

wherein the first set of data includes less data than the frames of training image data (Par 46 "the machine learning system can be fine-tuned (e.g., trained, fitted, etc.) for an instance of input data (e.g., an image, a video, a portion of a video, three-dimensional (3D) data, etc.) that is to be compressed and transmitted to a receiving device that includes a decoder", where the first set of data is obtained from the training data) and the second set of data includes less data than the frames of image data. (Par 47 "the set of updated parameters can be used to encode the input data ... the bitstream can also include a compressed version of the one or more subspace coordinates that correspond to the set of updated model parameters. In some cases, the receiving device can receive the bitstream.", where the second set of data is obtained from the input data)

Brehmer fails to explicitly teach having one or more layers with fixed weights implementing the pre-trained diffusion network.

In a related field of endeavor, Kumari teaches a neural network having one or more layers with fixed weights implementing the pre-trained diffusion network (Par 6 "training the diffusion model to generate a synthetic image based on text condition features by fine-tuning the first subset of parameters of the diffusion model based on a second training set different from the first training set during a second training phase, wherein the second subset of parameters of the diffusion model are held fixed during the second training phase", Par 85 "In some cases, the median change is maximum for parameters in the attention layers. Accordingly, embodiments of the present application allow changes for parameters in the attention layers, and may hold other parameters fixed."). It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Brehmer to include having one or more layers with fixed weights implementing the pre-trained diffusion network as taught by Kumari. Doing so would reduce the number of matrices to be stored after updating (Par 24 "by adjusting the projection matrices of the attention block(s) and holding other parameters fixed during training, only these matrices need to be stored after updating, which corresponds to ~75 MB of data, or 10-15 MB after compression").
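To visualize the claim 1 arrangement at issue (received fine-tuning weight values inserted into an adaptable layer of a second network whose remaining layers keep the pre-trained, fixed weights), the following is a minimal sketch assuming a LoRA-style adapter in PyTorch. The class names, shapes, and rank are hypothetical and are not taken from the application, Brehmer, or Kumari.

```python
# Minimal, hypothetical sketch: a frozen pre-trained layer plus an adaptable
# low-rank (LoRA-style) layer populated with received fine-tuning weights.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a small adaptable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # fixed-weight portion
            p.requires_grad = False
        out_f, in_f = base.weight.shape
        self.lora_a = nn.Parameter(torch.zeros(rank, in_f))   # adaptable layer
        self.lora_b = nn.Parameter(torch.zeros(out_f, rank))  # adaptable layer

    def load_finetuning_weights(self, a: torch.Tensor, b: torch.Tensor) -> None:
        """Insert fine-tuning weight values received from the sender."""
        with torch.no_grad():
            self.lora_a.copy_(a)
            self.lora_b.copy_(b)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fixed pre-trained path plus the low-rank adaptation.
        return self.base(x) + x @ self.lora_a.T @ self.lora_b.T

# Stand-in for one layer of a pre-trained diffusion model.
pretrained_layer = nn.Linear(64, 64)
specialized = LoRALinear(pretrained_layer, rank=4)

# Fine-tuning weight values, e.g. as received over the network.
received_a = torch.randn(4, 64) * 0.01
received_b = torch.randn(64, 4) * 0.01
specialized.load_finetuning_weights(received_a, received_b)

output = specialized(torch.randn(1, 64))   # generation would proceed from here
```

In a scheme of this shape, only the adapter tensors would ever travel from sender to receiver, which parallels the storage saving the rejection cites from Kumari (roughly 75 MB, or 10-15 MB after compression, for the adjusted matrices).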
Regarding claim 2, Brehmer as modified by Kumari teaches The method of claim 1. Brehmer further teaches wherein the first set of data is characterized by first dimensions and the training image data is characterized by second dimensions and wherein the first dimensions are smaller than the second dimensions (Par 45 "training of the machine learning system can include determining a subspace or manifold of model parameters having a lower dimension than the full parameter space", Par 162 "network training 704 can include a principal component analysis (PCA) algorithm that can be used to determine the subspace M of network parameters. For instance, PCA can be used to determine one or more directions and/or trajectories in the full parameter space in which model parameters performed well during network training 704 (e.g., based on loss function). In some examples, PCA can be used to reduce the dimensionality of the parameter space into principal components", where the first dimensions are the manifold of model parameters having a lower dimension, and the second dimensions are the full parameter space.)

Regarding claim 3, Brehmer as modified by Kumari teaches The method of claim 1. Brehmer further teaches wherein the second (Par 153 "In some examples, the systems and techniques disclosed herein can reduce the bitrate and/or file size for sending network parameter updates (e.g., fine-tuning neural network) to a decoder by selecting the fine-tuned weight vectors from a lower-dimensional subspace.")

Regarding claim 4, Brehmer as modified by Kumari teaches The method of claim 1.
Brehmer further teaches wherein the values of the set of fine-tuning weights correspond to low-rank adaptation parameter (LoRA) values (Par 192 "For example, an arithmetic encoder (e.g., arithmetic encoder 608) can be used to entropy-code the compressed model update (e.g., subspace update 906) and the compressed latent variables into bitstream", Par 195 "In some examples, the set of updated model parameters can correspond to updated parameter point 908 which can be equivalent to the global parameter point 904+(subspace matrix 902*subspace update 906)", where subspace matrix 902 and subspace update 906 are LoRA values for fine-tuning the model weights.)

Regarding claim 5, Brehmer as modified by Kumari teaches The method of claim 4. Brehmer further teaches wherein the first set of data corresponds to one of compressed representations of the training frames of training image data and sparse representations of the training frames of training image data (Par 155 "In some cases, network training 704 can include an iterative flow of training data 702 through neural network compression system 700 (e.g., using backpropagation training techniques). In some aspects, the parameters (e.g., weights, biases, etc.) for the trained neural network compression system 700 can be referred to as the global model parameters.", Par 163 "a sparse PCA can be used to reduce the size of the subspace M of network parameters (e.g., by applying a sparsity constraint to the input variables)").
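As a worked illustration of the update form the rejection equates with LoRA values for claims 4 and 12 (updated parameters equal to a global parameter point plus a subspace matrix times a subspace update), the snippet below rebuilds a full parameter vector from a much smaller transmitted update. The dimensions are invented for the example and are not taken from Brehmer.

```python
# Hypothetical sizes: a 1,000-value parameter vector is updated by sending
# only a 3-value subspace update (cf. global point 904 + subspace matrix 902
# * subspace update 906 in the cited Brehmer passages).
import numpy as np

rng = np.random.default_rng(0)
n_params, subspace_dim = 1_000, 3

global_params   = rng.normal(size=n_params)                  # known to both ends
subspace_matrix = rng.normal(size=(n_params, subspace_dim))  # known to both ends
subspace_update = np.array([0.2, -0.1, 0.05])                # what is transmitted

updated_params = global_params + subspace_matrix @ subspace_update
print(f"{subspace_update.size} values sent instead of {n_params}")
```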
Regarding claim 9, Brehmer teaches A computer-implemented method (Par 216 "Processes and methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media."), comprising:

generating a set of fine-tuning weights for a pre-trained diffusion model (Par 45 "the subspace or manifold of model parameters includes a portion of the weight vectors", Par 47 "In some cases, the bitstream can also include a compressed version of the one or more subspace coordinates that correspond to the set of updated model parameters. In some cases, the receiving device can receive the bitstream"), the generating including training a first artificial neural network using training frames of training image data (Par 52 "The weight values may initially be determined by an iterative flow of training data", Par 53 "The training input and output can include images as an illustrative example") in combination with a first set of data derived from the training frames of training image data (Par 190 "the process 1300 can include generating a set of global model parameters based on a training dataset ... In some cases, the training dataset may correspond to training data") wherein the first set of data includes less data than the training frames of training image data (Par 46 "the machine learning system can be fine-tuned (e.g., trained, fitted, etc.) for an instance of input data (e.g., an image, a video, a portion of a video, three-dimensional (3D) data, etc.) that is to be compressed and transmitted to a receiving device that includes a decoder"), the first artificial neural network having ... at least one trainable layer including the fine-tuning weights where the values of the fine-tuning weights are adjusted during the training (Par 55 "These multi-layered architectures may be trained one layer at a time and may be fine-tuned using back propagation.", Par 52 "The weight values may initially be determined by an iterative flow of training data", Par 46 "the machine learning system can be fine-tuned (e.g., trained, fitted, etc.) for an instance of input data (e.g., an image, a video, a portion of a video, three-dimensional (3D) data, etc.) that is to be compressed and transmitted", where the transmitting device implements the first artificial neural network.);

sending the values of the fine-tuning weights to a computing device configured to establish a specialized diffusion model by inserting the values of the fine-tuning weights into an adaptable layer of a second artificial neural network having fixed-weight layers implementing the pre-trained diffusion model (Par 46 "For example, fine-tuning of the neural network can include selecting a set of updated model parameters that correspond to a weight vector", Par 47 "The decoder of the receiving device can use the one or more subspace coordinates to determine the updated model parameters for the neural network. A machine learning system (e.g., a neural network, such as an RD-AE or other neural network) of the decoder can use the updated model parameters to decode the compressed input data.", Par 55 "These multi-layered architectures may be trained one layer at a time and may be fine-tuned using back propagation.", Par 52 "The weight values may initially be determined by an iterative flow of training data", where the receiving device implements the second artificial neural network);

deriving a second set of data from frames of image data containing at least scene information present in the training frames of training data wherein the second set of data includes less data than the frames of image data (Par 61 "a device using the compression and/or decompression techniques described can compress one or more images efficiently using the machine learning based techniques", Par 59 "the systems and techniques described herein can perform compression and/or decompression of video data. As used herein, the term 'image' and 'frame' are used interchangeably, referring to a standalone image or frame (e.g., a photograph) or a group or sequence of images or frames (e.g., making up a video or other sequence of images/frames)");

sending the second set of data to the computing device wherein the second artificial neural network is configured to generate images corresponding to the frames of image data using the second set of data (Par 61 "can transmit the compressed one or more images to a receiving device, and the receiving device can decompress the one or more compressed images efficiently using the machine learning based techniques described herein.").

Regarding claim 9, Brehmer fails to explicitly teach having one or more layers with fixed weights implementing the pre-trained diffusion model.

In a related field of endeavor, Kumari teaches a neural network having one or more layers with fixed weights implementing the pre-trained diffusion network (Par 6 "training the diffusion model to generate a synthetic image based on text condition features by fine-tuning the first subset of parameters of the diffusion model based on a second training set different from the first training set during a second training phase, wherein the second subset of parameters of the diffusion model are held fixed during the second training phase", Par 85 "In some cases, the median change is maximum for parameters in the attention layers. Accordingly, embodiments of the present application allow changes for parameters in the attention layers, and may hold other parameters fixed."). It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Brehmer to include having one or more layers with fixed weights implementing the pre-trained diffusion network as taught by Kumari. Doing so would reduce the number of matrices to be stored after updating (Par 24 "by adjusting the projection matrices of the attention block(s) and holding other parameters fixed during training, only these matrices need to be stored after updating, which corresponds to ~75 MB of data, or 10-15 MB after compression").
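Claim 9, as the rejection characterizes it, is the sender-side counterpart of claim 1: fine-tuning weights produced by training are sent once, and a compact per-frame data set derived from the image frames (and smaller than the frames themselves) is sent thereafter. The sketch below only illustrates that two-payload shape; the payload fields, the 68-point landmark placeholder, and the sizes are assumptions for the example.

```python
# Hypothetical sender-side payloads: a one-time fine-tuning-weight message
# and per-frame data that is smaller than the raw frame it is derived from.
from dataclasses import dataclass
import numpy as np

@dataclass
class FineTuningWeights:      # sent once, after training on the sender
    lora_a: np.ndarray
    lora_b: np.ndarray

@dataclass
class FramePayload:           # "second set of data", sent per frame
    landmarks: np.ndarray     # compact scene description, e.g. 3D landmarks

def derive_payload(frame: np.ndarray) -> FramePayload:
    # Placeholder for whatever compact derivation is actually used.
    return FramePayload(landmarks=np.zeros((68, 3), dtype=np.float32))

frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # raw frame: ~2.8 MB
payload = derive_payload(frame)                    # landmarks: 816 bytes
assert payload.landmarks.nbytes < frame.nbytes
```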
Regarding claim 10, Brehmer teaches the method of claim 9 wherein the first set of data is characterized by first dimensions and the training image data is characterized by second dimensions and wherein the first dimensions are smaller than the second dimensions (Par 45 "training of the machine learning system can include determining a subspace or manifold of model parameters having a lower dimension than the full parameter space").

Regarding claim 11, Brehmer teaches the method of claim 9 wherein the second (Par 153 "In some examples, the systems and techniques disclosed herein can reduce the bitrate and/or file size for sending network parameter updates (e.g., fine-tuning neural network) to a decoder by selecting the fine-tuned weight vectors from a lower-dimensional subspace.", where the input image has higher dimension than the updates sent over the network.)

Regarding claim 12, Brehmer teaches the method of claim 9 wherein the values of the set of fine-tuning weights correspond to low-rank adaptation (LoRA) parameter values (Par 192 "For example, an arithmetic encoder (e.g., arithmetic encoder 608) can be used to entropy-code the compressed model update (e.g., subspace update 906) and the compressed latent variables into bitstream", Par 195 "In some examples, the set of updated model parameters can correspond to updated parameter point 908 which can be equivalent to the global parameter point 904+(subspace matrix 902*subspace update 906)", where subspace matrix 902 and subspace update 906 are LoRA values for fine-tuning the model weights.)
Regarding claim 13, Brehmer teaches the method of claim 12 wherein the first set of data corresponds to one of one of compressed representations of the training frames of training image data and sparse representations of the training frames of training image data (Par 155 "In some cases, network training 704 can include an iterative flow of training data 702 through neural network compression system 700 (e.g., using backpropagation training techniques). In some aspects, the parameters (e.g., weights, biases, etc.) for the trained neural network compression system 700 can be referred to as the global model parameters.", Par 163 "a sparse PCA can be used to reduce the size of the subspace M of network parameters (e.g., by applying a sparsity constraint to the input variables)").

12. Claims 6-8 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Brehmer as modified by Kumari as applied to claims 1 and 9 above, and further in view of US 20220139108 A1 (Gupta et al., hereinafter Gupta).

Regarding claim 6, Brehmer and Kumari teach The method of claim 1. Brehmer and Kumari fail to explicitly teach wherein the frames of training image data include a face. In a related field of endeavor, Gupta teaches wherein the frames of training image data include a face (Par 51 "a training model has been trained on placement of landmarks within facial image data."). It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Brehmer to include wherein the frames of training image data include a face as taught by Gupta. Doing so would allow detecting facial information belonging to subjects within an image (Par 5 "detecting facial information belonging to passive (or non-active) subjects within an image in order to provide functionality directed to that facial information in image processing.").

Regarding claim 7, Brehmer as modified by Kumari and further modified by Gupta as described above teaches the method of claim 6 wherein the first set of data includes a first set of three dimensional coordinate locations corresponding to facial landmarks of the face. Gupta teaches wherein the first set of data includes a first set of three dimensional coordinate locations corresponding to facial landmarks of the face (Par 51 "a training model has been trained on placement of landmarks within facial image data.", Par 41 "A depth sensor may include any device configured to obtain information related to a range or distance between an object (i.e., features on a face) and the depth sensor. The depth sensor can generate a range image or depth map based on received depth information. For the purposes of this application, depth information (e.g., a range map) may be included in image information.", where the depth sensor provides a third dimension.) It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Brehmer to include wherein the first set of data includes a first set of three dimensional coordinate locations corresponding to facial landmarks of the face as taught by Gupta. Doing so would provide functionality directed to that facial information in image processing (Par 5 "detecting facial information belonging to passive (or non-active) subjects within an image in order to provide functionality directed to that facial information in image processing.").
Regarding claim 8, Brehmer as modified by Kumari and further modified by Gupta as described above teaches The method of claim 7 wherein the second set of data include a second set of three dimensional coordinate locations corresponding to the facial landmarks of the face wherein the second set of coordinate locations are different from the first set of three dimensional coordinate locations. Gupta teaches wherein the second set of data include a second set of three dimensional coordinate locations corresponding to the facial landmarks of the face wherein the second set of coordinate locations are different from the first set of three dimensional coordinate locations (Par 68 "identifying a second set of facial landmark locations for the subject at the second point in time, and calculating a difference in a first relationship between the facial landmark locations in the first set of facial landmark locations and a second relationship between the facial landmark locations in the second set of facial landmark locations."). It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Brehmer to include wherein the second set of data include a second set of three dimensional coordinate locations corresponding to the facial landmarks of the face wherein the second set of coordinate locations are different from the first set of three dimensional coordinate locations as taught by Gupta. Doing so would provide functionality directed to that facial information in image processing (Par 5 "detecting facial information belonging to passive (or non-active) subjects within an image in order to provide functionality directed to that facial information in image processing.").
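Claims 7, 8, 15, and 16, as mapped to Gupta, turn on two sets of three-dimensional facial-landmark coordinates in which the second set differs from the first. The sketch below shows one way such sets and their difference could be represented; the 68-point layout and the displacement measure are assumptions made for the example rather than details from Gupta or the application.

```python
# Hypothetical: two sets of 3D facial-landmark coordinates (68 points each),
# where the second set differs from the first (cf. Gupta, Par 68).
import numpy as np

rng = np.random.default_rng(1)
first_landmarks  = rng.uniform(size=(68, 3))                               # training-time set
second_landmarks = first_landmarks + rng.normal(scale=0.01, size=(68, 3))  # later frame

# Per-landmark displacement between the first and second sets.
displacement = np.linalg.norm(second_landmarks - first_landmarks, axis=1)
assert not np.allclose(second_landmarks, first_landmarks)
print("mean landmark shift:", float(displacement.mean()))
```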
Regarding claim 14, Brehmer as modified by Kumari teaches the method of claim 9. Brehmer and Kumari fail to explicitly teach wherein the frames of training image data include a face. Gupta teaches wherein the frames of training image data include a face (Par 51 "a training model has been trained on placement of landmarks within facial image data."). It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Brehmer to include wherein the frames of training image data include a face as taught by Gupta. Doing so would allow detecting facial information belonging to subjects within an image (Par 5 "detecting facial information belonging to passive (or non-active) subjects within an image in order to provide functionality directed to that facial information in image processing.").

Regarding claim 15, Brehmer as modified by Kumari and further modified by Gupta as described above teaches the method of claim 14 wherein the first set of data includes a first set of three dimensional coordinate locations corresponding to facial landmarks of the face. Gupta teaches wherein the first set of data includes a first set of three dimensional coordinate locations corresponding to facial landmarks of the face (Par 51 "a training model has been trained on placement of landmarks within facial image data.", Par 41 "A depth sensor may include any device configured to obtain information related to a range or distance between an object (i.e., features on a face) and the depth sensor. The depth sensor can generate a range image or depth map based on received depth information. For the purposes of this application, depth information (e.g., a range map) may be included in image information.", where the depth sensor provides a third dimension.) It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Brehmer to include wherein the first set of data includes a first set of three dimensional coordinate locations corresponding to facial landmarks of the face as taught by Gupta. Doing so would provide functionality directed to that facial information in image processing (Par 5 "detecting facial information belonging to passive (or non-active) subjects within an image in order to provide functionality directed to that facial information in image processing.").

Regarding claim 16, Brehmer as modified by Kumari and further modified by Gupta as described above teaches the method of claim 15 wherein the second set of data include a second set of three dimensional coordinate locations corresponding to the facial landmarks of the face wherein the second set of coordinate locations are different from the first set of three dimensional coordinate locations. Gupta teaches wherein the second set of data include a second set of three dimensional coordinate locations corresponding to the facial landmarks of the face wherein the second set of coordinate locations are different from the first set of three dimensional coordinate locations (Par 68 "identifying a second set of facial landmark locations for the subject at the second point in time, and calculating a difference in a first relationship between the facial landmark locations in the first set of facial landmark locations and a second relationship between the facial landmark locations in the second set of facial landmark locations."). It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Brehmer to include wherein the second set of data include a second set of three dimensional coordinate locations corresponding to the facial landmarks of the face wherein the second set of coordinate locations are different from the first set of three dimensional coordinate locations as taught by Gupta. Doing so would provide functionality directed to that facial information in image processing (Par 5 "detecting facial information belonging to passive (or non-active) subjects within an image in order to provide functionality directed to that facial information in image processing.").

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN PATRICK GOCO whose telephone number is (571) 272-5872. The examiner can normally be reached M-Th, 7:00 am - 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHN P GOCO/
Examiner, Art Unit 2611

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

Aug 29, 2024: Application Filed
Mar 17, 2026: Non-Final Rejection — §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
