Prosecution Insights
Last updated: April 19, 2026
Application No. 18/785,692

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM

Non-Final OA: §103
Filed: Jul 26, 2024
Examiner: MCCOY, AIDAN WILLIAM
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Preferred Networks Inc.
OA Round: 1 (Non-Final)

Grant Probability: 50% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (1 granted / 2 resolved cases), -12.0% vs TC avg
Interview Lift: +100.0% across resolved cases with an interview (with vs. without)
Typical Timeline: 2y 9m average prosecution; 25 applications currently pending
Career History: 27 total applications across all art units
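As a sketch of how metrics like these are typically derived, the following uses the standard definitions of allow rate and relative lift; the function names and formulas are assumptions, not this product's actual implementation:

```python
def career_allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved cases that ended in a grant."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative improvement in allowance rate when an interview is held."""
    return (rate_with - rate_without) / rate_without

# 1 granted out of 2 resolved cases -> 50% career allow rate.
print(f"{career_allow_rate(1, 2):.0%}")  # 50%

# A +100% lift means the with-interview rate is double the without-interview
# rate (illustrative rates below; the examiner's per-bucket rates are not shown).
print(f"{interview_lift(0.8, 0.4):+.1%}")  # +100.0%
```

With only 2 resolved cases, both numbers carry very wide error bars, which is presumably why the page flags the small sample size.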

Statute-Specific Performance

Statute   Allow Rate   vs TC Avg
§101         4.7%       -35.3%
§103        52.9%       +12.9%
§102        15.9%       -24.1%
§112        22.4%       -17.6%

Tech Center averages are estimates. Based on career data from 2 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8, 9, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jin (US 12,182,911 B2) in view of S. Dolgikh, "Native Concept Frameworks in Unsupervised Generative Learning," 2021 11th International Conference on Advanced Computer Information Technologies (ACIT), Deggendorf, Germany, 2021, pp. 748-752 (hereinafter "Dolgikh").

Regarding claim 1, Jin teaches an image processing device comprising: one or more storage devices (col. 12 lines 9-14); and one or more processors (col. 8 lines 35-38), wherein the one or more processors are configured to: create a first image by inputting a first latent variable into a first generative model (abstract, summary, claim 1, figs 1, 5 & 6, col. 3 lines 1-12); store the first latent variable in the one or more storage devices in association with identification information (col. 8 lines 59-64, figs 16 & 17); acquire the first latent variable and the identification information associated with the first latent variable from the one or more storage devices (figs 14 & 17, col. 11 lines 48-50); generate a second latent variable based on the first latent variable (figs 9 & 14, col. 3 lines 23-30); create a second image by inputting the second latent variable into the first generative model (abstract, summary, claim 1, figs 10 & 14, col. 3 lines 1-12, col. 8 lines 59-64); and store the second latent variable in the one or more storage devices with the identification information (col. 8 lines 59-64, fig. 17), and wherein the second image is different from the first image and includes at least a second object different from a first object included in the first image (claim 1, col. 3 line 62 - col. 4 line 24).

Jin fails to teach storing the latent variable in association with the identification information of the first generative model. However, Dolgikh teaches a latent variable in association with the identification information of the first generative model (section I, "Introduction" – "These results demonstrated that structure that emerges in the latent representations created by models of generative learning in the process of unsupervised self-learning with minimization of generative error can have intrinsic associations with characteristics patterns in the observable data and perhaps, can be used as a foundation for learning methods and processes that use these associations for improved efficiency"; section III, "Results" – "latent representations created by generative models are specific to each individual learning model that can be explained by the peculiarities of the training process").

Dolgikh describes a framework for an association between latent representations and models of generative learning. While Dolgikh does not directly describe an identification system of latent variables and generative models, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Dolgikh with the stored identification information of latent variables present in Jin and include the associated generative models in that identification information. Dolgikh is considered analogous to the claimed invention as it is in the same field of generative machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Jin and Dolgikh to improve the efficiency of generative models.

Regarding claim 2, Jin in view of Dolgikh teaches the image processing device of claim 1. Jin further teaches wherein the second object is obtained by changing at least one of an attribute or a posture of the first object (col. 7 lines 50-55, col. 4 lines 4-63, figs 7 & 8).

Regarding claim 3, Jin in view of Dolgikh teaches the image processing device of claim 1. Jin further teaches wherein the one or more processors generate the second latent variable by fusing the first latent variable and a third latent variable (col. 4 lines 25-38, lines 42-47), and wherein the first object and a third object included in a third image are fused in the second object (claim 1, col. 4 lines 4-9), the third image being generated by inputting the third latent variable into the first generative model, and wherein the third latent variable is stored in the one or more storage devices in association with the first generative model (col. 4 lines 42-63). Jin clearly teaches that "latent information fusion unit 120 fuses the latent information of the two images to generate fusion latent information" (see column 4, lines 25-28). Given the broadest reasonable interpretation consistent with the disclosure, this reads upon the first and third objects being fused into the second object as currently claimed. Additionally, Jin describes to-be-fused images as "three or more different images"; given the broadest reasonable interpretation, this teaches the use of a third object. Combining a first and second object does not involve a distinctly different process than combining a first and third object.

Regarding claim 4, Jin in view of Dolgikh teaches the image processing device of claim 1.
Jin further teaches wherein the one or more processors are further configured to generate the first latent variable by using another image different from the first image (col. 4 lines 9-24, lines 42-47). Jin describes generating latent variables of two different images, suggesting the same for three or more different images. Generating latent information from the second image, or from three or more different images, is analogous to generating a latent variable using an image different from the first image. The designation of the latent variable as the "first" latent variable is trivial, as the manner in which it is generated and used would be the same as that described for a latent variable of a second image or of one of the three or more different images.

Regarding claim 5, Jin in view of Dolgikh teaches the image processing device of claim 4. Jin further teaches wherein the one or more processors generate the first latent variable by using at least one of an encoder model or the first generative model, and the another image (col. 3 line 62 - col. 4 line 3 – "The latent information used herein may be information that may be put into a latent variable inferred from observation data (for example, image data) through a model or the like").

Regarding claim 6, Jin in view of Dolgikh teaches the image processing device of claim 1. Jin further teaches wherein the one or more processors are further configured to generate the first latent variable by fusing a fourth latent variable and a fifth latent variable (col. 4 lines 25-38, lines 42-47), wherein a fourth object included in a fourth image and a fifth object included in a fifth image are fused in the first object (claim 1, col. 4 lines 4-9, lines 42-47), the fourth image being generated by inputting the fourth latent variable into the first generative model (col. 4 lines 42-63), and the fifth image being generated by inputting the fifth latent variable into the first generative model (col. 4 lines 42-63), wherein the fourth latent variable is stored in the one or more storage devices in association with the first generative model (col. 8 lines 61-62, col. 12 lines 9-14, col. 4 lines 42-47), and wherein the fifth latent variable is stored in the one or more storage devices in association with the first generative model (col. 8 lines 61-62, col. 12 lines 9-14, col. 4 lines 42-47). Under the broadest reasonable interpretation, Jin's suggestion of "three or more different images" implies the process of using fourth and fifth latent variables from fourth and fifth images. Additionally, the process of using a fourth and fifth image would be the same as using two different latent variables from two different images to create a fused latent variable.

Regarding claim 8, Jin teaches an image processing device comprising: one or more storage devices; and one or more processors, wherein the one or more processors are configured to: display, on a display device, a process selection screen on which at least a start of first image processing and a start of second image processing are selectable (figs 7, 8, 18, col. 4 lines 39-47, col. 11 lines 32-37, latent information fusion, image fusion/image generation); start the first image processing based on an instruction of a user and create a first image by using a first generative model (abstract, summary, claim 1, figs 1, 5 & 6, col. 3 lines 1-12); store, in the one or more storage devices, a first latent variable used to create the first image (col. 8 lines 59-64, figs 16 & 17) based on an instruction of the user (figs 7, 8 & 18, col. 3 lines 31-38, col. 8 lines 41-47, lines 62-64); start the second image processing based on an instruction of the user and create a second image by using the first generative model (abstract, summary, claim 1, figs 10 & 14, col. 3 lines 1-12, col. 8 lines 59-64); and store, in the one or more storage devices, a second latent variable used to create the second image (col. 8 lines 59-64, fig. 17) based on an instruction of the user (figs 7, 8 & 18), wherein the second latent variable is generated based on the first latent variable (figs 9 & 14, col. 3 lines 23-30), and wherein the first image processing and the second image processing are different (col. 4 lines 25-27, lines 48-50, col. 11 lines 28-37 – latent information fusion and image fusion/image generation are different).

Jin fails to teach a latent variable in association with the first generative model. However, Dolgikh teaches a latent variable in association with the identification information of the first generative model, as quoted in the rejection of claim 1 above. For the same reasons set forth for claim 1, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Jin and Dolgikh to improve the efficiency of generative models.

Regarding claim 9, Jin in view of Dolgikh teaches the image processing device as claimed in claim 8. Jin further teaches wherein the first image processing and the second image processing are any of image creation processing, image fusion processing, attribute adjustment processing, posture change processing, and latent variable generation processing (col. 7 lines 50-55, col. 4 lines 4-63, figs 7 & 8).

Regarding claim 16, Jin teaches an image processing device comprising: one or more storage devices (col. 12 lines 4-43); and one or more processors (col. 12 lines 38-67), wherein the one or more processors are configured to: select a first image based on an instruction of a user (col. 4 lines 10-12); display, on a display device, a plurality of images generated by using a generative model that is the same as a generative model of the first image (col. 4 lines 59-63, col. 13 lines 62-63, claims 12 & 20); select a second image from among the plurality of images based on an instruction of the user (col. 4 lines 10-12); fuse a latent variable of the first image and a latent variable of the second image to generate a fused latent variable (col. 3 lines 23-30); input the fused latent variable into the generative model to generate a fused image (col. 3 lines 1-6); and store the fused latent variable in the one or more storage devices (col. 10 lines 60-61).
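The latent-variable pipeline the examiner maps onto claims 1, 3, and 16 (store a latent with model identification information, derive or fuse latents, regenerate through the same model) can be sketched as follows. Everything here is hypothetical: the toy model, the store, and the use of plain interpolation as the "fusion" are assumptions, since the cited references are not specified at this level.

```python
import uuid

class LatentStore:
    """Toy stand-in for the 'one or more storage devices': each record keeps a
    latent variable together with identification information of the generative
    model that produced its image."""
    def __init__(self):
        self._records = {}

    def put(self, latent, model_id):
        record_id = str(uuid.uuid4())
        self._records[record_id] = (latent, model_id)
        return record_id

    def get(self, record_id):
        return self._records[record_id]

def fuse_latents(z_a, z_b, alpha=0.5):
    """Combine two latents elementwise (interpolation is an assumed fusion)."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(z_a, z_b)]

def generative_model(z):
    # Placeholder "first generative model": any deterministic latent -> image map.
    return [v * 2 for v in z]

store = LatentStore()
MODEL_ID = "generative-model-1"

# Claim 1: create a first image from a first latent and store the latent with
# the model's identification information.
z1 = [0.0, 1.0, 0.5]
first_image = generative_model(z1)
rid = store.put(z1, MODEL_ID)

# Acquire the latent plus its identification info, derive a second latent by
# fusing with the latent of a "third" image (claim 3), and regenerate.
z1_back, model_id = store.get(rid)
z3 = [1.0, 0.0, 0.5]
z2 = fuse_latents(z1_back, z3)
second_image = generative_model(z2)
store.put(z2, model_id)  # second latent stored with the same model id

print(z2)            # [0.5, 0.5, 0.5]
print(second_image)  # [1.0, 1.0, 1.0]
```

The point of contention in the rejection is the last field: Jin stores latents with identification information, but associating that information with the generative model itself is what the examiner draws from Dolgikh.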
Jin fails to teach a latent variable in association with the identification information of the first generative model. However, Dolgikh teaches a latent variable in association with the identification information of the first generative model, as quoted in the rejection of claim 1 above. For the same reasons set forth for claim 1, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Jin and Dolgikh to improve the efficiency of generative models.

Regarding claim 17, Jin teaches an image processing method comprising: displaying, by one or more processors, on a display device, a process selection screen on which at least a start of first image processing and a start of second image processing are selectable (figs 7, 8, 18, col. 4 lines 39-47, col. 7 lines 8-19, lines 62-64, col. 11 lines 32-37, latent information fusion, image fusion/image generation); starting, by the one or more processors, the first image processing based on an instruction of a user and creating a first image by using a first generative model (abstract, summary, claim 1, figs 1, 5 & 6, col. 3 lines 1-12); storing, by the one or more processors, in one or more storage devices, a first latent variable used to create the first image (col. 8 lines 59-64, figs 16 & 17) based on an instruction of the user (figs 7, 8 & 18); starting, by the one or more processors, the second image processing based on an instruction of the user and creating a second image by using the first generative model (abstract, summary, claim 1, figs 10 & 14, col. 3 lines 1-12, col. 8 lines 59-64); and storing, by the one or more processors, in the one or more storage devices, a second latent variable used to create the second image in association with the first generative model based on an instruction of the user (col. 8 lines 59-64, fig. 17), wherein the second latent variable is generated based on the first latent variable (figs 9 & 14, col. 3 lines 23-30), and wherein the first image processing and the second image processing are different (col. 4 lines 25-27, lines 48-50, col. 11 lines 28-37 – latent information fusion and image fusion/image generation are different).

Jin fails to teach a latent variable in association with the first generative model. However, Dolgikh teaches a latent variable in association with the first generative model, as quoted in the rejection of claim 1 above. For the same reasons set forth for claim 1, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Jin and Dolgikh to improve the efficiency of generative models.

Regarding claim 18, Jin in view of Dolgikh teaches the image processing device as claimed in claim 1. Jin further teaches wherein the first latent variable includes at least one of a value sampled from a probability distribution, code information, attribute information, noise, gene information, or posture information (col. 2 lines 61-67).

Regarding claim 19, Jin in view of Dolgikh teaches the image processing device as claimed in claim 8. Jin further teaches wherein the first latent variable includes at least one of a value sampled from a probability distribution, code information, attribute information, noise, gene information, or posture information (col. 2 lines 61-67).

Regarding claim 20, Jin in view of Dolgikh teaches the image processing device as claimed in claim 16. Jin further teaches wherein the latent variable of the first image includes at least one of a value sampled from a probability distribution, code information, attribute information, noise, gene information, or posture information (col. 2 lines 61-67).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Jin in view of Dolgikh, and further in view of Liao (TW M629679 U).

Regarding claim 7, Jin in view of Dolgikh teaches the image processing device of claim 1. Jin further teaches wherein the one or more processors perform image processing by using the first generative model based on an instruction of a user (col. 4 lines 4-56). Jin in view of Dolgikh fails to teach wherein the one or more storage devices store at least the first generative model and a second generative model. However, Liao teaches wherein the one or more storage devices store at least the first generative model and a second generative model (page 2, paragraph 3 – "a storage unit that stores a plurality of generated models"). Liao is considered analogous to the claimed invention as it is in the same field of machine learning and image processing. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Liao with Jin in view of Dolgikh to implement storage of multiple generative models.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Jin in view of Dolgikh, and further in view of Okada (US 2020/0242773 A1).

Regarding claim 10, Jin in view of Dolgikh teaches the image processing device as claimed in claim 9.
Jin further teaches wherein the first image processing is the image creation processing (col. 11 lines 32-35). Jin in view of Dolgikh fails to teach wherein a start button for the first image processing is displayed at a leftmost and uppermost position in the process selection screen, in comparison with a start button for another image processing. However, Okada teaches wherein a start button for the first image processing is displayed at a leftmost and uppermost position (figs 4A & 5, paragraph [0083]) in the process selection screen, in comparison with a start button for another image processing (fig. 4A). Okada describes a start button, an upper-left corner location, and one image processing button's position relative to another. While Okada does not directly describe a "start" button in an upper-left corner, it does describe a "scan" button in an upper-left corner relative to "copy" and "fax" buttons, all of which can be considered analogous to image processing buttons. Additionally, Okada explicitly describes a start button and an upper-left corner. Okada is considered analogous to the claimed invention as it is in the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art to utilize the teachings of Okada with Jin in view of Dolgikh in order to specify the implementation of a user interface.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Jin in view of Dolgikh, and further in view of Jensen (US 2019/0280792 A1).

Regarding claim 11, Jin in view of Dolgikh teaches the image processing device as claimed in claim 9. Jin further teaches wherein the second image processing is attribute adjustment processing (figs 7, 8 & 10 attributes, col. 4 lines 39-40), and wherein the one or more storage devices further store identification information of the user (col. 3 lines 32-38, col. 12 lines 9-14 – user login implies storing identifying information of a user) in a case where at least one of the second image or the second latent variable is stored in the one or more storage devices based on an instruction of the user (col. 8 lines 61-62, col. 12 lines 9-14 – the fused image and latent information are stored based on an instruction of the user; it would be obvious to do the same with the second image or second latent variable). Jin in view of Dolgikh fails to teach points given to the user in association with each other, and wherein the one or more processors are configured to subtract a predetermined number of points from the points. However, Jensen teaches points given to the user in association with each other, and wherein the one or more processors are configured to subtract a predetermined number of points from the points (paragraphs [0020], [0022]). Jensen describes a system that allows a user to purchase video streaming content, wherein each time the user requests video, a predetermined number of tokens corresponding to a length of time is removed from the user's account; a user can do this for a plurality of videos. Jensen is considered analogous to the claimed invention as it is in the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Jensen with the teachings of Jin in view of Dolgikh to implement a system of monetization such as that of Jensen.

Claims 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Jin in view of Dolgikh, and further in view of Jensen and VanDeburg (US 2013/0144700 A1).

Regarding claim 12, Jin in view of Dolgikh teaches the image processing device as claimed in claim 9. Jin further teaches wherein the first image processing is one of the image creation processing or the image fusion processing (col. 4 lines 39-50, col. 11 lines 32-37), and wherein the second image processing is one of the attribute adjustment processing or the posture change processing (figs 7, 8 & 10 attributes, col. 4 lines 39-40). Jin in view of Dolgikh fails to teach wherein the one or more storage devices further store identification information of the user and points given to the user in association with each other, wherein the one or more processors are configured to: subtract a predetermined number of points from the points in response to the first image being generated; and subtract a predetermined number of points from the points in response to the second image being generated. However, Jensen teaches wherein the one or more storage devices further store identification information of the user and points given to the user in association with each other (paragraphs [0020], [0022]), wherein the one or more processors are configured to: subtract a predetermined number of points from the points in response to the first image being generated; and subtract a predetermined number of points from the points in response to the second image being generated (paragraph [0022]). The motivation to combine Jensen with Jin in view of Dolgikh is the same as that for claim 11.
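The points mechanism at issue in claims 11-14 (user identification stored with a point balance, a predetermined cost deducted per image generated, the second operation costing less than the first) can be sketched as below. The class, the cost values, and the two-tier pricing are hypothetical; the claims do not specify an implementation.

```python
class UserAccount:
    """Toy account: user identification info stored in association with points."""
    def __init__(self, user_id: str, points: int):
        self.user_id = user_id
        self.points = points

    def charge(self, cost: int):
        # Subtract a predetermined number of points, refusing to go negative.
        if cost > self.points:
            raise ValueError("insufficient points")
        self.points -= cost

# Hypothetical predetermined costs; the second is less than the first,
# mirroring the limitation the examiner reads onto VanDeburg.
FIRST_IMAGE_COST = 10   # e.g. image creation / image fusion
SECOND_IMAGE_COST = 4   # e.g. attribute adjustment / posture change

acct = UserAccount("user-123", points=20)
acct.charge(FIRST_IMAGE_COST)    # first image generated
acct.charge(SECOND_IMAGE_COST)   # second image generated
print(acct.points)  # 6
```

Setting `SECOND_IMAGE_COST = 0` would correspond to the claim 13/15 variant, which the examiner analogizes to VanDeburg's buy-one-get-one-free scheme.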
Jin in view of Dolgikh and Jensen fails to teach wherein the predetermined number of points subtracted in response to the second image being generated is less than the predetermined number of points subtracted in response to the first image being generated. However, VanDeburg teaches this limitation (paragraph [0003]). VanDeburg describes a buy-one-get-one-free system; this is analogous to a second product being cheaper than the first, or a second image using fewer points than the first. VanDeburg is considered analogous to the claimed invention as it is in the same field of computer systems. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of VanDeburg with the point system of Jensen and the image processing system (including model identification information) of Jin in view of Dolgikh, which would yield the predictable result of increasing customer traffic.

Regarding claim 13, Jin in view of Dolgikh and Jensen teaches the image processing device as claimed in claim 12. VanDeburg further teaches wherein the predetermined number of points subtracted in response to the second image being generated is 0 (paragraph [0003]).

Regarding claim 14, Jin in view of Dolgikh teaches the image processing device as claimed in claim 9. Jin further teaches wherein the first image processing is one of the image creation processing, the attribute adjustment processing, the posture change processing, or the latent variable generation processing (col. 4 lines 4-38 – latent information fusion is latent variable generation processing), wherein the second image processing is the image fusion processing (col. 4 lines 48-56), wherein the one or more storage devices further store identification information of the user (col. 3 lines 32-38, col. 12 lines 9-14 – user login implies storing identifying information of a user), the first image being generated (col. 3 lines 1-6, col. 8 lines 59-64), and the second image being generated (col. 3 lines 9-13 – "a variety of unique fusion images" implies generation of more than one image). Jin in view of Dolgikh fails to teach wherein the one or more storage devices further store identification information of the user and points given to the user in association with each other, wherein the one or more processors are configured to: subtract a predetermined number of points from the points in response to the first image being generated; and subtract a predetermined number of points from the points in response to the second image being generated, and wherein the predetermined number of points subtracted in response to the first image being generated is less than the predetermined number of points subtracted in response to the second image being generated. However, Jensen teaches points given to the user in association with each other (paragraphs [0020], [0022]), wherein the one or more processors are configured to: subtract a predetermined number of points from the points in response to the first image being generated; and subtract a predetermined number of points from the points in response to the second image being generated (paragraph [0022]). The motivation to combine Jensen with Jin in view of Dolgikh is the same as that for claim 11. Jin in view of Dolgikh and Jensen fails to teach wherein the predetermined number of points subtracted in response to the first image being generated is less than the predetermined number of points subtracted in response to the second image being generated.
However, VanDeburg teaches wherein the predetermined number of points subtracted in response to the first image being generated is less than the predetermined number of points subtracted in response to the second image being generated (paragraph [0003]). The motivation to combine VanDeburg with Jin in view of Dolgikh and Jensen is the same as that for claim 12.

Regarding claim 15, Jin in view of Dolgikh and Jensen teaches the image processing device as claimed in claim 14. VanDeburg further teaches wherein the predetermined number of points subtracted in response to the second image being generated is 0 (paragraph [0003]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Kumar (US 2022/0270310 A1).
Zhang, C., Yang, Z., He, X., & Deng, L. (2020). Multimodal Intelligence: Representation Learning, Information Fusion, and Applications. IEEE Journal of Selected Topics in Signal Processing, 14(3), 478-493. https://doi.org/10.1109/jstsp.2020.2987728
Hahn, S., & Choi, H. (2019). Disentangling Latent Factors of Variational Auto-Encoder with Whitening. Lecture Notes in Computer Science, 590-603. https://doi.org/10.1007/978-3-030-30508-6_47
Kaneko, T., Hiramatsu, K., & Kashino, K. (2017). Generative Attribute Controller with Conditional Filtered Generative Adversarial Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, pp. 7006-7015. https://doi.org/10.1109/CVPR.2017.741

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aidan W. McCoy, whose telephone number is (571) 272-5935. The examiner can normally be reached 8:00 AM-5:00 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/AIDAN W MCCOY/
Examiner, Art Unit 2611

/TAMMY PAIGE GODDARD/
Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

Jul 26, 2024 - Application Filed
Mar 17, 2026 - Non-Final Rejection, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 50%
With Interview: 99% (+100.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
