Prosecution Insights
Last updated: April 19, 2026
Application No. 18/700,985

BIOLOGICAL IMAGE PROCESSING PROGRAM, BIOLOGICAL IMAGE PROCESSING APPARATUS, AND BIOLOGICAL IMAGE PROCESSING METHOD

Non-Final OA §103
Filed: Apr 12, 2024
Examiner: KOPPOLU, VAISALI RAO
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Tohoku University
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 12m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (89 granted / 113 resolved; +16.8% vs TC avg; above average)
Interview Lift: strong, +26.8% among resolved cases with interview
Typical Timeline: 2y 12m avg prosecution; 22 applications currently pending
Career History: 135 total applications across all art units
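The headline allow rate above follows directly from the granted/resolved counts shown in the panel; a quick check:

```python
# Career allow rate from the counts shown in the panel (89 granted of 113 resolved)
granted, resolved = 89, 113
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 78.8%, displayed rounded as 79%
```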

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§102: 13.3% (-26.7% vs TC avg)
§112: 25.5% (-14.5% vs TC avg)
TC averages are estimates; based on career data from 113 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 14 and 15 are objected to because of the following informalities:
- Claim 14, preamble has a typo in the dependent claim number. It should read as “The biological image processing method according to claim 11”.
- Claim 15, preamble has a typo in the dependent claim number. It should read as “The biological image processing method according to claim 11”.
Appropriate correction is required.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5-8, 10-13 and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (See Machine Translation for CN 113592843 A; hereafter referred to as Han) in view of Wang et al. (Wang, C., Wang, Y., Liu, Y., He, Z., He, R., & Sun, Z. (2019). ScleraSegNet: An attention assisted U-Net model for accurate sclera segmentation. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2(1), 40-54; hereafter referred to as Wang).

Regarding Claim 1, Han teaches: A non-transitory computer readable storage medium encoded with computer readable biological image processing instruction, which, when executed by processor circuitry, cause the processor circuitry to perform segmentation of a biological image (Han, Abstract, “inputting the retinal blood vessel image into a trained improved U-Net network model for image segmentation”; Han, page 4 detailed ways, “the GPU is GeForce RTX 2080Ti, and the memory is 11GB*2”) to perform: reducing a size of the biological image that has been input and extracting a feature point, by adding an attention block for averaging channels to a convolution block for the biological image (Han, page 6, para 1, “the size of the feature maps is gradually reduced”; Han, page 6, para 2, “extract image-related feature”; Han, page 2, “Step 1: Obtain the image of the retinal blood vessel to be detected; Step 2: Input the retinal blood vessel image into the trained and improved U-Net network model for image segmentation”; Han, page 3, step 3, “On the basis of the full convolutional network U-Net, a NoL-block attention module is added between the encoder and the decoder of the full convolutional network U-Net, and the NoL-block
attention is used to capture the image The spatial dependence between pixels with distance in the segmentation”); and outputting information relating to a feature image obtained by segmenting the biological image including the extracted feature point, by contracting the channels in a state in which the attention block has been added to the convolution block (page 3, step 3, “Step 3: Output the segmentation result of the retinal blood vessel image. On the basis of the full convolutional network U-Net, a NoL-block attention module is added between the encoder and the decoder of the full convolutional network U-Net, and the NoL-block attention is used to capture the image The spatial dependence between pixels with distance in the segmentation, the output of the last convolution block of the encoder passes the NoL-block attention module and then is input to the first convolution block of the decoder for uploading sampling”); However, Han fails to explicitly recite: averaging channels to a convolution block for the biological image; and contracting the channels in a state in which the attention block has been added to the convolution block. In the same field of endeavor, Wang teaches: averaging channels to a convolution block for the biological image (Wang, page 45, col. 2, “Channel attention module contains a squeeze block, which takes global average pooling on the feature map P to produce a channel vector Fc, then followed by a excitation block, which uses a multi-layer perceptron (MLP) with one hidden layer to estimate attention across channel from the channel vector Fc”); and contracting the channels in a state in which the attention block has been added to the convolution block (Wang, page 44, col. 1, para 1, “high-resolution features from the contracting path and the upsampled output from the expansive path are concatenated via skip connections for further information fusion and more precise localization”). 
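Han's NoL-block is characterized above only as an attention module that captures spatial dependence between distant pixels. A generic non-local (self-attention) block in that spirit can be sketched as follows; the dot-product similarity, the residual connection, and all shapes are illustrative assumptions, not details taken from Han:

```python
import numpy as np

def nonlocal_attention(x):
    """Minimal non-local (self-attention) block over spatial positions.

    x: (C, H, W). Every output position aggregates features from all
    positions, weighted by pairwise similarity, so dependencies between
    distant pixels are captured regardless of convolutional receptive field.
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)               # (C, N) with N spatial positions
    sim = flat.T @ flat                      # (N, N) pairwise dot-product similarities
    sim -= sim.max(axis=1, keepdims=True)    # stabilize the softmax
    attn = np.exp(sim)
    attn /= attn.sum(axis=1, keepdims=True)  # each row is a distribution over positions
    out = flat @ attn.T                      # (C, N): attention-weighted aggregation
    return x + out.reshape(C, H, W)          # residual connection

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
y = nonlocal_attention(x)
print(y.shape)  # (4, 8, 8)
```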
Han and Wang are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Han with the invention of Wang to arrive at an invention that adds the attention module for averaging channels to a convolution block for the biological image and contracts the channels in a state in which the attention block has been added to the convolution block; doing so can improve the segmentation performance (Wang, Abstract); thus, one of ordinary skill in the art would have been motivated to combine the references.

Regarding Claim 2, Han in view of Wang teaches the non-transitory computer readable storage medium according to claim 1, causing the processor circuitry to perform: extracting the feature point by preferentially reducing a region not including an outer shape or a vascular portion of a segmentation target site included in the biological image (Wang, page 43, col. 1, last para, “these attention modules implicitly increase the weight associated with the target class in feature maps and generate soft region proposals on-the-fly, hence the most discriminative features for sclera segmentation are highlighted and irrelevant features in the non-sclera regions are suppressed”).

Regarding Claim 3, Han in view of Wang teaches the non-transitory computer readable storage medium according to claim 1, causing the processor circuitry to perform: expanding the channels to the size of the biological image that has been input and outputting the feature image, after contracting the channels (Wang, page 44, col.
1, “to recover the spatial information lost in pooling layers of the contracting path and meanwhile reduce the number of channels, the expansive path adopts a series of bilinear upsampling operations, followed by two 3×3 convolutional units. Then, high-resolution features from the contracting path and the upsampled output from the expansive path are concatenated via skip connections for further information fusion and more precise localization… Finally, a 1 × 1 convolutional layer and a sigmoid activation function are used to output the probability map of sclera segmentation, which has the same size as the original input”). Regarding Claim 5, Han in view of Wang teaches the non-transitory computer readable storage medium according to claim 1, wherein the biological image is an image of an eyeball (Han, Fig. 5 (a) is an eye map; page 2, step 1: “obtaining the retina blood vessel image to be detected”), and the computer readable biological image processing instructions causes the processor circuitry to perform outputting information relating to the feature image obtained by segmenting a foveal avascular zone of the eyeball (Han, Fig. 5 and 6(e) “the NoL-UNet result of the invention”; Han, page 2, step 3, “step 3: outputting retinal vascular image segmentation result; Wang, page 47, col. 2, “for each image, the corresponding per-pixel annotations of sclera, sclera vascular structures, iris, pupil and periocular region are provided”). 
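The channel-attention operation quoted from Wang above (global average pooling to a channel vector, followed by a one-hidden-layer MLP estimating per-channel attention) is essentially a squeeze-and-excitation block. A minimal NumPy sketch, with illustrative shapes and randomly initialized weights (assumptions, not values from the reference):

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel attention.

    feature_map: (C, H, W) feature map P.
    w1: (C, C // r) hidden-layer weights; w2: (C // r, C) output weights.
    Returns the feature map rescaled per channel by attention in (0, 1).
    """
    # Squeeze: global average pooling over spatial dims -> channel vector Fc
    fc = feature_map.mean(axis=(1, 2))            # shape (C,)
    # Excitation: one-hidden-layer MLP estimating attention across channels
    hidden = np.maximum(fc @ w1, 0.0)             # ReLU
    attn = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid, shape (C,)
    # Rescale each channel of the feature map by its attention weight
    return feature_map * attn[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 16, 16, 2                         # r is the reduction ratio
P = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1
out = channel_attention(P, w1, w2)
print(out.shape)  # (8, 16, 16)
```

Because each attention weight lies strictly between 0 and 1, the block can only down-weight channels relative to the input, which is how less-informative channels get suppressed.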
Regarding Claim 6, Han teaches: A biological image processing apparatus for performing segmentation of a biological image, the biological image processing apparatus comprising (Han, Summary, “segment retinal capillaries with low contrast information, and proposes an improved U-Net fundus retinal blood vessel image segmentation method and device”): processor circuitry (Han, page 4 detailed ways, “the GPU is GeForce RTX 2080Ti, and the memory is 11GB*2”) configured to: reduce a size of the biological image that has been input and extracting a feature point, by adding an attention block for averaging channels to a convolution block for the biological image (Han, page 6, para 1, “the size of the feature maps is gradually reduced”; page 6, para 2, “extract image-related feature”; Han, page 2, “Step 1: Obtain the image of the retinal blood vessel to be detected; Step 2: Input the retinal blood vessel image into the trained and improved U-Net network model for image segmentation”; Han, page 3, step 3, “On the basis of the full convolutional network U-Net, a NoL-block attention module is added between the encoder and the decoder of the full convolutional network U-Net, and the NoL-block attention is used to capture the image The spatial dependence between pixels with distance in the segmentation”); and output information relating to a feature image obtained by segmenting the biological image including the extracted feature point, by contracting the channels in a state in which the attention block has been added to the convolution block (Han, page 3, step 3, “Step 3: Output the segmentation result of the retinal blood vessel image. 
On the basis of the full convolutional network U-Net, a NoL-block attention module is added between the encoder and the decoder of the full convolutional network U-Net, and the NoL-block attention is used to capture the image The spatial dependence between pixels with distance in the segmentation, the output of the last convolution block of the encoder passes the NoL-block attention module and then is input to the first convolution block of the decoder for uploading sampling”); However, Han fails to explicitly recite: averaging channels to a convolution block for the biological image; and contracting the channels in a state in which the attention block has been added to the convolution block. In the same field of endeavor, Wang teaches: averaging channels to a convolution block for the biological image (Wang, page 45, col. 2, “Channel attention module contains a squeeze block, which takes global average pooling on the feature map P to produce a channel vector Fc, then followed by a excitation block, which uses a multi-layer perceptron (MLP) with one hidden layer to estimate attention across channel from the channel vector Fc”); and contracting the channels in a state in which the attention block has been added to the convolution block (Wang, page 44, col. 1, para 1, “high-resolution features from the contracting path and the upsampled output from the expansive path are concatenated via skip connections for further information fusion and more precise localization”). Han and Wang are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Han with the invention of Wang to arrive at an invention that adds the attention module for averaging channels to a convolution block for the biological image and contracts the channels in a state in which the attention block has been added to the convolution block; doing so can improve the segmentation performance (Wang, Abstract); thus, one of ordinary skill in the art would have been motivated to combine the references.

Regarding Claim 7, Han in view of Wang teaches the biological image processing apparatus according to claim 6, wherein the processor circuitry extracts the feature point by preferentially reducing a region not including an outer shape or a vascular portion of a segmentation target site included in the biological image (Wang, page 43, col. 1, last para, “these attention modules implicitly increase the weight associated with the target class in feature maps and generate soft region proposals on-the-fly, hence the most discriminative features for sclera segmentation are highlighted and irrelevant features in the non-sclera regions are suppressed”).

Regarding Claim 8, Han in view of Wang teaches the biological image processing apparatus according to claim 6, wherein the processor circuitry expands the channels to the size of the biological image that has been input and outputs the feature image, after contracting the channels (Wang, page 44, col. 1, “to recover the spatial information lost in pooling layers of the contracting path and meanwhile reduce the number of channels, the expansive path adopts a series of bilinear upsampling operations, followed by two 3×3 convolutional units.
Then, high-resolution features from the contracting path and the upsampled output from the expansive path are concatenated via skip connections for further information fusion and more precise localization… Finally, a 1 × 1 convolutional layer and a sigmoid activation function are used to output the probability map of sclera segmentation, which has the same size as the original input”). Regarding Claim 10, Han in view of Wang teaches the biological image processing apparatus according to claim 6, wherein the biological image is an image of an eyeball (Han, Fig. 5 (a) is an eye map; page 2, step 1: “obtaining the retina blood vessel image to be detected”), and the processor circuitry outputs information relating to the feature image obtained by segmenting a foveal avascular zone of the eyeball (Han, Fig. 5 and 6(e) “the NoL-UNet result of the invention”; Han, page 2, step 3, “step 3: outputting retinal vascular image segmentation result; Wang, page 47, col. 2, “for each image, the corresponding per-pixel annotations of sclera, sclera vascular structures, iris, pupil and periocular region are provided”). 
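The expansive-path behavior quoted from Wang (upsampling, skip-connection concatenation, then a 1×1 convolution with a sigmoid producing a probability map at the input resolution) can be sketched as one decoder step. Nearest-neighbour upsampling stands in for bilinear interpolation here, and all shapes and weights are illustrative assumptions:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling (a stand-in for bilinear), x: (C, H, W)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def decode_step(low, skip, w_1x1):
    """One expansive-path step: upsample the coarse features, concatenate the
    high-resolution skip features, then apply a 1x1 convolution + sigmoid."""
    up = upsample2x(low)                        # (C_low, 2H, 2W)
    fused = np.concatenate([skip, up], axis=0)  # (C_skip + C_low, 2H, 2W)
    # A 1x1 convolution is a per-pixel linear map across channels
    logits = np.tensordot(w_1x1, fused, axes=(0, 0))  # (2H, 2W)
    return 1.0 / (1.0 + np.exp(-logits))        # per-pixel probability map

rng = np.random.default_rng(0)
low = rng.standard_normal((16, 8, 8))    # coarse decoder features
skip = rng.standard_normal((8, 16, 16))  # high-resolution encoder features
w = rng.standard_normal(24) * 0.1        # 1x1 conv weights over 8 + 16 channels
prob = decode_step(low, skip, w)
print(prob.shape)  # (16, 16): same spatial size as the skip features
```

At the final decoder step this yields a map the same size as the original input, matching the quoted description.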
Regarding Claim 11, Han teaches: A biological image processing method for a computer configured to perform segmentation of a biological image to perform (Han, page 2, Summary, “proposes an improved U-Net fundus retinal blood vessel image segmentation method and device”; Han, page 4 detailed ways, “the GPU is GeForce RTX 2080Ti, and the memory is 11GB*2”) to perform: reducing a size of the biological image that has been input and extracting a feature point, by adding an attention block for averaging channels to a convolution block for the biological image (page 6, para 1, “the size of the feature maps is gradually reduced”; page 6, para 2, “extract image-related feature”; page 2, “Step 1: Obtain the image of the retinal blood vessel to be detected; Step 2: Input the retinal blood vessel image into the trained and improved U-Net network model for image segmentation”; page 3, step 3, “On the basis of the full convolutional network U-Net, a NoL-block attention module is added between the encoder and the decoder of the full convolutional network U-Net, and the NoL-block attention is used to capture the image The spatial dependence between pixels with distance in the segmentation”); and outputting information relating to a feature image obtained by segmenting the biological image including the extracted feature point, by contracting the channels in a state in which the attention block has been added to the convolution block (page 3, step 3, “Step 3: Output the segmentation result of the retinal blood vessel image. 
On the basis of the full convolutional network U-Net, a NoL-block attention module is added between the encoder and the decoder of the full convolutional network U-Net, and the NoL-block attention is used to capture the image The spatial dependence between pixels with distance in the segmentation, the output of the last convolution block of the encoder passes the NoL-block attention module and then is input to the first convolution block of the decoder for uploading sampling”); However, Han fails to explicitly recite: averaging channels to a convolution block for the biological image; and contracting the channels in a state in which the attention block has been added to the convolution block. In the same field of endeavor, Wang teaches: averaging channels to a convolution block for the biological image (Wang, page 45, col. 2, “Channel attention module contains a squeeze block, which takes global average pooling on the feature map P to produce a channel vector Fc, then followed by a excitation block, which uses a multi-layer perceptron (MLP) with one hidden layer to estimate attention across channel from the channel vector Fc”); and contracting the channels in a state in which the attention block has been added to the convolution block (Wang, page 44, col. 1, para 1, “high-resolution features from the contracting path and the upsampled output from the expansive path are concatenated via skip connections for further information fusion and more precise localization”). Han and Wang are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Han with the invention of Wang to arrive at an invention that adds the attention module for averaging channels to a convolution block for the biological image and contracts the channels in a state in which the attention block has been added to the convolution block; doing so can improve the segmentation performance (Wang, Abstract); thus, one of ordinary skill in the art would have been motivated to combine the references.

Regarding Claim 12, Han in view of Wang teaches the biological image processing method according to claim 11 for the computer to perform: extracting the feature point by preferentially reducing a region not including an outer shape or a vascular portion of a segmentation target site included in the biological image (Wang, page 43, col. 1, last para, “these attention modules implicitly increase the weight associated with the target class in feature maps and generate soft region proposals on-the-fly, hence the most discriminative features for sclera segmentation are highlighted and irrelevant features in the non-sclera regions are suppressed”).

Regarding Claim 13, Han in view of Wang teaches the biological image processing method according to claim 11 for the computer to perform: expanding the channels to the size of the biological image that has been input and outputting the feature image, after contracting the channels (Wang, page 44, col. 1, “to recover the spatial information lost in pooling layers of the contracting path and meanwhile reduce the number of channels, the expansive path adopts a series of bilinear upsampling operations, followed by two 3×3 convolutional units.
Then, high-resolution features from the contracting path and the upsampled output from the expansive path are concatenated via skip connections for further information fusion and more precise localization… Finally, a 1 × 1 convolutional layer and a sigmoid activation function are used to output the probability map of sclera segmentation, which has the same size as the original input”). Regarding Claim 15, Han in view of Wang teaches the biological image processing method according to claim 11, wherein the biological image is an image of an eyeball (Han, Fig. 5 (a) is an eye map; page 2, step 1: “obtaining the retina blood vessel image to be detected”), and the computer readable biological image processing instructions causes the processor circuitry to perform outputting information relating to the feature image obtained by segmenting a foveal avascular zone of the eyeball (Han, Fig. 5 and 6(e) “the NoL-UNet result of the invention”; Han, page 2, step 3, “step 3: outputting retinal vascular image segmentation result; Wang, page 47, col. 2, “for each image, the corresponding per-pixel annotations of sclera, sclera vascular structures, iris, pupil and periocular region are provided”). Regarding Claim 16, Han in view of Wang teaches the non-transitory computer readable storage medium according to claim 2, causing the computer to perform: expanding the channels to the size of the biological image that has been input and outputting the feature image, after contracting the channels (Wang, page 44, col. 1, “to recover the spatial information lost in pooling layers of the contracting path and meanwhile reduce the number of channels, the expansive path adopts a series of bilinear upsampling operations, followed by two 3×3 convolutional units. 
Then, high-resolution features from the contracting path and the upsampled output from the expansive path are concatenated via skip connections for further information fusion and more precise localization… Finally, a 1 × 1 convolutional layer and a sigmoid activation function are used to output the probability map of sclera segmentation, which has the same size as the original input”). Regarding Claim 17, Han in view of Wang teaches the biological image processing apparatus according to claim 7, wherein, the processor circuitry expands the channels to the size of the biological image that has been input and outputting the feature image, after contracting the channels (Wang, page 44, col. 1, “to recover the spatial information lost in pooling layers of the contracting path and meanwhile reduce the number of channels, the expansive path adopts a series of bilinear upsampling operations, followed by two 3×3 convolutional units. Then, high-resolution features from the contracting path and the upsampled output from the expansive path are concatenated via skip connections for further information fusion and more precise localization… Finally, a 1 × 1 convolutional layer and a sigmoid activation function are used to output the probability map of sclera segmentation, which has the same size as the original input”). Regarding Claim 18, Han in view of Wang teaches the biological image processing method according to claim 12, for the computer to perform: expanding the channels to the size of the biological image that has been input and outputting the feature image, after contracting the channels (Wang, page 44, col. 1, “to recover the spatial information lost in pooling layers of the contracting path and meanwhile reduce the number of channels, the expansive path adopts a series of bilinear upsampling operations, followed by two 3×3 convolutional units. 
Then, high-resolution features from the contracting path and the upsampled output from the expansive path are concatenated via skip connections for further information fusion and more precise localization… Finally, a 1 × 1 convolutional layer and a sigmoid activation function are used to output the probability map of sclera segmentation, which has the same size as the original input”).

Claims 4, 9 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (See Machine Translation for CN 113592843 A; hereafter referred to as Han) in view of Wang et al. (Wang, C., Wang, Y., Liu, Y., He, Z., He, R., & Sun, Z. (2019). ScleraSegNet: An attention assisted U-Net model for accurate sclera segmentation. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2(1), 40-54; hereafter referred to as Wang) further in view of Iwase et al. (See Machine Translation for JP 2020166813 A; hereafter referred to as Iwase).

Regarding Claim 4, Han in view of Wang teaches the non-transitory computer readable storage medium according to claim 1, but fails to explicitly teach: calculating a vascular density for each of one or more regions in the biological image based on the information relating to the feature image. In the same field of endeavor, Iwase teaches: calculating a vascular density for each of one or more regions in the biological image based on the information relating to the feature image (Iwase, page 26, 17th embodiment, “The analysis unit 2208 applies a predetermined image analysis process to the high-quality image generated by the high-quality image unit 404. In the field of ophthalmology, for example, image analysis processing includes segmentation of the retinal layer, layer thickness measurement, papillary three-dimensional shape analysis, sieve plate analysis, vascular density measurement of OCTA images, and corneal shape analysis of images acquired by OCT. Includes any existing image analysis processing such as”).
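The vascular-density measurement that Iwase is cited for can be illustrated as a per-region fraction of vessel pixels in a binary segmentation mask; the 2×2 region grid and the toy mask below are assumptions for illustration only:

```python
import numpy as np

def vascular_density(mask, grid=(2, 2)):
    """Fraction of vessel pixels in each region of a binary segmentation mask.

    mask: (H, W) array of {0, 1}; grid: number of regions along each axis.
    Returns a (grid[0], grid[1]) array of per-region densities.
    """
    gh, gw = grid
    H, W = mask.shape
    # Split the mask into a gh x gw grid of blocks and average each block
    blocks = mask.reshape(gh, H // gh, gw, W // gw)
    return blocks.mean(axis=(1, 3))

mask = np.zeros((4, 4), dtype=float)
mask[:2, :2] = 1.0      # vessels fill the top-left region only
dens = vascular_density(mask)
print(dens)  # [[1. 0.], [0. 0.]]
```

Real OCTA analyses use anatomically defined sectors rather than a uniform grid; the block-mean reshape here only illustrates the per-region computation.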
Han, Wang and Iwase are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Han in view of Wang with the invention of Iwase to arrive at an invention that calculates the vascular density using the image analysis process; doing so can improve the image analysis and segmentation performance (Iwase, page 26, para 11); thus, one of ordinary skill in the art would have been motivated to combine the references.

Regarding Claim 9, Han in view of Wang teaches the biological image processing apparatus according to claim 6, but fails to explicitly teach: the processor circuitry calculates a vascular density for each of one or more regions in the biological image based on the information relating to the feature image. In the same field of endeavor, Iwase teaches: the processor circuitry calculates a vascular density for each of one or more regions in the biological image based on the information relating to the feature image (Iwase, page 26, 17th embodiment, “The analysis unit 2208 applies a predetermined image analysis process to the high-quality image generated by the high-quality image unit 404. In the field of ophthalmology, for example, image analysis processing includes segmentation of the retinal layer, layer thickness measurement, papillary three-dimensional shape analysis, sieve plate analysis, vascular density measurement of OCTA images, and corneal shape analysis of images acquired by OCT. Includes any existing image analysis processing such as”). Han, Wang and Iwase are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Han in view of Wang with the invention of Iwase to arrive at an invention that calculates the vascular density using the image analysis process; doing so can improve the image analysis and segmentation performance (Iwase, page 26, para 11); thus, one of ordinary skill in the art would have been motivated to combine the references.

Regarding Claim 14, Han in view of Wang teaches the biological image processing method according to claim 11, but fails to explicitly teach: calculating a vascular density for each of one or more regions in the biological image based on the information relating to the feature image. In the same field of endeavor, Iwase teaches: calculating a vascular density for each of one or more regions in the biological image based on the information relating to the feature image (Iwase, page 26, 17th embodiment, “The analysis unit 2208 applies a predetermined image analysis process to the high-quality image generated by the high-quality image unit 404. In the field of ophthalmology, for example, image analysis processing includes segmentation of the retinal layer, layer thickness measurement, papillary three-dimensional shape analysis, sieve plate analysis, vascular density measurement of OCTA images, and corneal shape analysis of images acquired by OCT. Includes any existing image analysis processing such as”). Han, Wang and Iwase are considered analogous art, as they are reasonably pertinent to the same field of endeavor of image processing.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Han in view of Wang with the invention of Iwase to arrive at an invention that calculates the vascular density using the image analysis process; doing so can improve the image analysis and segmentation performance (Iwase, page 26, para 11); thus, one of ordinary skill in the art would have been motivated to combine the references.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
- CN 113744178 A, Skin Lesion Segmentation Method Based On Convolution Attention Model.
- Shan, T., & Yan, J. (2021). SCA-Net: A spatial and channel attention network for medical image segmentation. IEEE Access, 9, 160926-160937.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAISALI RAO KOPPOLU, whose telephone number is (571) 270-0273. The examiner can normally be reached Monday - Friday, 8:30 - 5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

VAISALI RAO KOPPOLU
Examiner, Art Unit 2664

/JENNIFER MEHMOOD/
Supervisory Patent Examiner, Art Unit 2664

Prosecution Timeline

Apr 12, 2024: Application Filed
Apr 12, 2024: Response after Non-Final Action
Jan 30, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586356: ARTIFICIAL IMAGE GENERATION WITH TRAFFIC SIGNS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579680: IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579824: OCCUPANT DETECTION DEVICE AND OCCUPANT DETECTION METHOD (granted Mar 17, 2026; 2y 5m to grant)
Patent 12573210: PARKING ASSISTANCE DEVICE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12573087: OBJECT THREE-DIMENSIONAL LOCALIZATIONS IN IMAGES OR VIDEOS (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+26.8%)
Median Time to Grant: 2y 12m
PTA Risk: Low
Based on 113 resolved cases by this examiner; grant probability derived from career allow rate.
