Prosecution Insights
Last updated: April 19, 2026
Application No. 17/586,286

METHOD FOR DETECTING SERIAL SECTION OF MEDICAL IMAGE

Non-Final OA §103
Filed: Jan 27, 2022
Examiner: BARNES JR, CARL E
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: Vuno Inc.
OA Round: 5 (Non-Final)
Grant Probability: 32% (At Risk)
OA Rounds: 5-6
To Grant: 4y 4m
With Interview: 57%

Examiner Intelligence

Career Allow Rate: 32% — grants only 32% of cases (65 granted / 202 resolved; −22.8% vs TC avg)
Interview Lift: +25.2% — a strong lift for resolved cases with an interview vs. without
Avg Prosecution: 4y 4m typical timeline; 32 applications currently pending
Total Applications: 234 across all art units (career history)

Statute-Specific Performance

§101: 14.3% (−25.7% vs TC avg)
§103: 62.6% (+22.6% vs TC avg)
§102: 9.0% (−31.0% vs TC avg)
§112: 8.7% (−31.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 202 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/02/2026 has been entered.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. KR10-2021-0012162, filed on 01/28/2021.

Response to Amendment

Claims 1-4 and 6-13 were previously pending and subject to a final action mailed 10/02/2025. In the response filed on 01/02/2026, claims 1 and 12-13 were amended. Therefore, claims 1-4 and 6-13 are currently pending and subject to the non-final action below.

Response to Arguments

Applicant's arguments, see pages 6-9, filed 01/02/2026, with respect to claims 1-13 under 35 U.S.C. 103 have been fully considered but are not persuasive.

Applicant's arguments: Applicant respectfully submitted that Brieu, Mouton, and Chukka, whether taken alone or in combination, fail to teach or suggest the claimed features. Because the coregistration in Brieu is performed between two different digital images, Brieu fails to teach or suggest calculating a distance between segments within the same image or estimating the number of serial sections within the same image. In particular, the claim 1 limitation "identifying tissue sections corresponding to the serial section within the medical image based on the estimated number of tissue sections and the estimated distance between the segments" cannot be derived from Brieu.
Mouton similarly fails to remedy the deficiencies of Brieu. While Mouton discloses a technique for detecting tissue segments in a slide image, it focuses on detecting image profiles (regions) from a plurality of images generated from a plurality of objects and associating each profile with the object to which it belongs. Mouton analyzes which profiles among a plurality of subimages (profiles) included in different images correspond to the same cell. Mouton fails to teach the specific configuration of "identifying tissue sections corresponding to the serial section within the medical image based on the estimated number of tissue sections and the estimated distance between the segments" within a single image, as recited in amended claim 1. Chukka is silent regarding distinguishing a plurality of serial sections within the same slide or estimating the spatial relationship between such sections. In view of the foregoing, the cited references fail to disclose or suggest the specific combination of features recited in amended claim 1.

Examiner's response: After carefully reviewing applicant's arguments and the prior art of record, the examiner respectfully disagrees for the reasons below.

Brieu teaches:

a method for detecting a serial section of a medical image (Brieu − [0049] Fig. 4, method for coregistering digital images of tissue slices (serial section) obtained from needle biopsies; Fig. 1, reference 11, digital images of tissue slices stored in database (12));

the method performed by a computing device including at least one processor, the method comprising: (Brieu − [0042] FIG. 1 shows a system 10 for coregistering digital images of tissue slices obtained from needle biopsies);
detecting segments included in at least one tissue which exists in the medical image (Brieu − [0050-0051] Fig. 4: in a first step 26, first image objects are generated from a first digital image 39 taken of first tissue slice 20; the first image objects are segments);

estimating a number of tissue sections corresponding to the serial section (Brieu − [0096-0097] Fig. 16: characteristics such as length, width, area and shape index of the image objects are quantified for each of the image objects in each of the digital images; each possible pair of objects among all images corresponds to a configuration characterized by a similarity measure that is the inverse of the sum of the absolute differences of the geometric characteristics of the two objects);

and identifying tissue sections corresponding to the serial section based on the estimated number of tissue sections (Brieu − [0096-0097] Fig. 16, citing the same quantified characteristics and pairwise similarity measure).

Mouton teaches:

estimating a number of tissue sections (Mouton − [0006] [0032]: obtain accurate and efficient stereology-based estimates of the number and size of biological objects (e.g., cells) in tissue sections);

based on a number of the at least one local point (Mouton − [0016] FIG. 5(b) shows the same two EDF images as FIG. 5(a), after segmentation; [0052] Fig. 11 shows how newly selected boundary points and smoothing affect the previous boundary of a cell in a synthetic image; [0053] because images collected in datasets will have varying brightness, intensity thresholds can be set adaptively by the estimated GMM for each image, allowing the algorithm to generate consistent segmentations for different cell types — segmentation of different cell types by intensity thresholds (color intensity values));

estimating a distance between the segments based on the segments in the medical image based on at least one local point for each segment (Mouton − [0043-0045]: [0045] the focus distance of the (i,j)-th and (i0,j0)-th grid squares, S_{i,j} and S_{i0,j0}, can be defined by the Euclidean distance of their corresponding normalized focus vectors as shown in Equation 2 of FIG. 8; Equation 3 shows the measure of the closeness of the (i,j)-th and (i0,j0)-th grid squares; finally, the likelihood of the (i,j)-th and (i0,j0)-th grid squares belonging to the same cell can be estimated by Equation 4 — a Euclidean distance equation to identify cells that belong to each other on different grid squares);

and identifying tissue sections corresponding to the serial section with the image based on the estimated number of tissue sections and the estimated distance between the segments (Mouton − [0043-0046]: [0045] as above; [0046] using the likelihood measure, L, defined above for two subimages belonging to the same cell, the likelihood of a subimage belonging to the cytoplasm of a particular cell is estimated by considering the fact that its nucleus is part of the cell; [0159] on completion, images in each disector stack were merged into a single synthetic Extended Depth of Field (EDF) image).

Chukka teaches:

extracting at least one local point for each of regions corresponding to each of the segments based on sizes of differences of color intensity values between the segments and an entire region of the medical image (Chukka − [abstract]: identifies dominant color regions (intensity values) within the tissue data and identifies seed points (local points) within those regions using image segmentation techniques; [0051] Fig. 5, in block 560 the component characterizes each of the identified objects based at least in part on location and any number of characteristics including, for example, color characteristics, shape and size characteristics);

being equal or less than a threshold (Chukka − [0050]: those pixels whose image gradient magnitude is greater than or equal to the gradient magnitude threshold);

wherein the at least one local point for each segment is extracted in the entire region of the medical image including an external region of the segment (Chukka − [0051]: in block 530, the component retrieves the seed points (local points) that correspond to the region currently being analyzed).

Mouton is cited for teaching the limitations of "estimating a distance between the segments based on the segments in the medical image based on at least one local point for each segment; and identifying tissue sections corresponding to the serial section with the image based on the estimated number of tissue sections and the estimated distance between the segments." Applicant does not recite the limitation "within a single image" in the amended claims. The term "the medical image" can be interpreted as a z-stack of image slices merged together as one image on completion, which is a single image; Mouton recites at [0159] that, on completion, images in each disector stack were merged into a single synthetic Extended Depth of Field (EDF) image.
Therefore, the rejection is maintained.

Examiner Notes

The disclosure does not define the terms serial section, segments, or tissue sections. One of ordinary skill in the art would understand that a serial section is a series of sections cut in sequence by a microtome (i.e., a medical knife) from a prepared specimen (as of tissue). A tissue section is a block/area of the specimen (as of tissue) that can include objects (e.g., cells). A segment is an object (cell, nodule) within the prepared specimen (as of tissue).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-4 and 6-13 are rejected under 35 U.S.C.
103 as being unpatentable over Brieu (US PGPUB 20140228707 A1) in view of Mouton (US PGPUB 20190272638 A1), further in view of Chukka (US 20160042511 A1, Filed Date: Nov. 17, 2004).

Regarding independent claim 1, Brieu teaches:

A method for detecting a serial section of a medical image (Brieu − [0049] Fig. 4, method for coregistering digital images of tissue slices (serial section) obtained from needle biopsies; Fig. 1, reference 11, digital images of tissue slices stored in database (12));

the method performed by a computing device including at least one processor, the method comprising: (Brieu − [0042] FIG. 1 shows a system 10 for coregistering digital images of tissue slices obtained from needle biopsies);

detecting segments included in at least one tissue which exists in the medical image (Brieu − [0050-0051] Fig. 4: in a first step 26, first image objects are generated from a first digital image 39 taken of first tissue slice 20; the first image objects are segments);

estimating a number of tissue sections corresponding to the serial section (Brieu − [0096-0097] Fig. 16: characteristics such as length, width, area and shape index of the image objects are quantified for each of the image objects in each of the digital images; each possible pair of objects among all images corresponds to a configuration characterized by a similarity measure that is the inverse of the sum of the absolute differences of the geometric characteristics of the two objects);

and identifying tissue sections corresponding to the serial section based on the estimated number of tissue sections (Brieu − [0096-0097] Fig. 16, citing the same quantified characteristics and pairwise similarity measure).

Brieu does not explicitly teach: a distance between the segments based on the segments. However, Mouton teaches:

estimating a number of tissue sections (Mouton − [0006] [0032]: obtain accurate and efficient stereology-based estimates of the number and size of biological objects (e.g., cells) in tissue sections);

based on a number of the at least one local point (Mouton − [0016] FIG. 5(b) shows the same two EDF images as FIG. 5(a), after segmentation; [0052] Fig. 11 shows how newly selected boundary points and smoothing affect the previous boundary of a cell in a synthetic image; [0053] because images collected in datasets will have varying brightness, intensity thresholds can be set adaptively by the estimated GMM for each image, allowing the algorithm to generate consistent segmentations for different cell types);

estimating a distance between the segments based on the segments in the medical image based on at least one local point for each segment (Mouton − [0043-0045]: [0045] the focus distance of the (i,j)-th and (i0,j0)-th grid squares, S_{i,j} and S_{i0,j0}, can be defined by the Euclidean distance of their corresponding normalized focus vectors as shown in Equation 2 of FIG. 8; Equation 3 shows the measure of the closeness of the (i,j)-th and (i0,j0)-th grid squares; finally, the likelihood of the (i,j)-th and (i0,j0)-th grid squares belonging to the same cell can be estimated by Equation 4 — a Euclidean distance equation to identify cells that belong to each other on different grid squares);

and identifying tissue sections corresponding to the serial section with the image based on the estimated number of tissue sections and the estimated distance between the segments (Mouton − [0043-0046]: [0045] as above; [0046] using the likelihood measure, L, defined above for two subimages belonging to the same cell, the likelihood of a subimage belonging to the cytoplasm of a particular cell is estimated by considering the fact that its nucleus is part of the cell; [0159] on completion, images in each disector stack were merged into a single synthetic Extended Depth of Field (EDF) image).

Brieu and Mouton are analogous art because they are from the same problem-solving area: utilizing digital image processing on tissue samples to determine areas of interest regarding tissue cells. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Brieu and Mouton before him or her, to combine the teachings of Brieu and Mouton. The rationale for doing so would have been to provide an automatic neural network that correctly classifies cells with similar characteristics into the same group, as discussed by Mouton ([0006]). Therefore, it would have been obvious to combine Brieu and Mouton to obtain the invention as specified in the instant claims.
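The Mouton mapping above turns a focus-vector comparison into a same-cell decision: normalize the focus vectors, take their Euclidean distance, and convert that distance into a likelihood. A minimal sketch of that idea follows; the Gaussian kernel and the `sigma` parameter are illustrative assumptions, not Mouton's actual Equations 2-4.

```python
import numpy as np

def focus_distance(f_a, f_b):
    # Normalize each focus vector to unit length, then take the
    # Euclidean distance between them (the Equation-2-style measure
    # described in the cited passage).
    f_a = np.asarray(f_a, dtype=float)
    f_b = np.asarray(f_b, dtype=float)
    f_a = f_a / np.linalg.norm(f_a)
    f_b = f_b / np.linalg.norm(f_b)
    return float(np.linalg.norm(f_a - f_b))

def same_cell_likelihood(f_a, f_b, sigma=0.5):
    # Map the distance to a (0, 1] likelihood that two grid squares
    # belong to the same cell; identical focus profiles give 1.0.
    # The Gaussian form here is an illustrative choice.
    d = focus_distance(f_a, f_b)
    return float(np.exp(-(d ** 2) / (2 * sigma ** 2)))
```

Scaling a focus vector leaves the distance unchanged, so two grid squares with the same focus profile at different brightness still score as the same cell.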
Brieu does not explicitly teach: extracting at least one local point for each of regions corresponding to each of the segments based on sizes of differences of color intensity values between the segments and an entire region of the medical image. However, Chukka teaches:

extracting at least one local point for each of regions corresponding to each of the segments based on sizes of differences of color intensity values between the segments and an entire region of the medical image (Chukka − [abstract]: identifies dominant color regions (intensity values) within the tissue data and identifies seed points (local points) within those regions using image segmentation techniques; [0051] Fig. 5, in block 560 the component characterizes each of the identified objects based at least in part on location and any number of characteristics including, for example, color characteristics, shape and size characteristics);

being equal or less than a threshold (Chukka − [0050]: those pixels whose image gradient magnitude is greater than or equal to the gradient magnitude threshold);

wherein the at least one local point for each segment is extracted in the entire region of the medical image including an external region of the segment (Chukka − [0051]: in block 530, the component retrieves the seed points (local points) that correspond to the region currently being analyzed).

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Brieu, Mouton, and Chukka, as each invention relates to utilizing digital image processing on tissue samples to determine areas of interest regarding tissue cells, thereby providing an automatic neural network for classification of cells with similar characteristics.
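Brieu's pairwise comparison, as characterized in the passages cited above, reduces to a simple formula: the similarity of two image objects is the inverse of the sum of absolute differences of their geometric characteristics. The sketch below is a hedged illustration (the characteristic key names and the `eps` guard are implementation assumptions, not from Brieu), and it reproduces the examiner-quoted count of 150 cross-image pairs for four images of five objects each.

```python
from itertools import combinations

def similarity(obj_a, obj_b, eps=1e-9):
    # Inverse of the sum of absolute differences of geometric
    # characteristics; eps avoids division by zero for identical objects.
    keys = ("length", "width", "area", "shape_index")
    return 1.0 / (sum(abs(obj_a[k] - obj_b[k]) for k in keys) + eps)

# Four digital images, five image objects each. Pairs are formed only
# across different images: C(20, 2) - 4 * C(5, 2) = 190 - 40 = 150.
objects = [(img, idx) for img in range(4) for idx in range(5)]
cross_image_pairs = [(a, b) for a, b in combinations(objects, 2)
                     if a[0] != b[0]]
print(len(cross_image_pairs))  # 150
```

The highest-similarity configurations are the candidate matches between sections; near-identical geometry drives the denominator toward zero and the similarity toward its maximum.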
Regarding dependent claim 2, which depends on claim 1, Brieu teaches: wherein the detecting the segments includes detecting the segments included in the at least one tissue which exists in the medical image (Brieu − [0049] Fig. 4, method for coregistering digital images of tissue slices (serial section) obtained from needle biopsies; Fig. 1, reference 11, digital images of tissue slices stored in database (12); [0042] FIG. 1 shows a system 10 for coregistering digital images of tissue slices obtained from needle biopsies).

Brieu does not explicitly teach a pre-learned deep learning model. However, Mouton teaches: by inputting the medical image in a pre-learned deep learning model (Mouton − [0054] embodiments of the subject invention provide an automation platform for scientists, such as neuroscientists, to complete unbiased stereology studies with greater accuracy, precision, speed, and lower costs; in some embodiments, the automatic stereology of the invention can use machine learning, including deep learning from a convolutional neural network (CNN) and adaptive segmentation algorithms (ASA), to segment stained cells from EDF images created from 3-D disector volumes).

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Brieu, Mouton, and Chukka, as each invention relates to utilizing digital image processing on tissue samples to determine areas of interest regarding tissue cells, thereby providing an automatic neural network for classification of cells with similar characteristics.
Regarding dependent claim 3, which depends on claim 1, Brieu teaches: wherein the detecting the segments includes: determining candidate segments included in the at least one tissue which exists in the medical image based on an intensity of the medical image (Brieu − [0047] Fig. 2, tissue reacting to biomarkers such as hematoxylin and eosin (H&E); [0069] a landmark is located over a nucleus exhibiting a particular length, width, area, linearity (or non-linearity) and average pixel intensity); and determining segments corresponding to a detection object from the candidate segments based on sizes of the candidate segments (Brieu − [0096] in step 102, characteristics such as length, width, area and shape index of the image objects are quantified for each of the image objects in each of the digital images).

Regarding dependent claim 4, which depends on claim 1, Brieu teaches: wherein the estimating the number of tissue sections and the distance between the segments includes: calculating difference values between the segments and an entire region by comparing each of the segments with the entire region of the medical image (Brieu − [0051] Fig. 5, each digital image comprises pixel values associated with the locations of each of the pixels 42; thus, a pixel represents a position or spatial location on a digital image; the image analysis program operates on the digital pixel values and links the pixels to form objects; each object is linked to a set of pixel locations based on the associated pixel values; [0097] in step 103, pairs of objects from among all of the images are compared based on the similarity of geometric characteristics of the paired objects); estimating the number of tissue sections corresponding to the serial section based on the at least one local point by considering sizes of the segments (Brieu − [0105] FIGS. 19A-B illustrate how additional landmarks are placed on corrected middle paths; FIG. 19A shows an image object 150 with a middle path 151; FIG. 19B shows a paired image object 152 with a middle path 153); and extracting the at least one local point includes determining a point where the sizes of the difference values are equal to or less than a threshold in the entire region of the medical image as the at least one local point (Brieu − [0044] the parameters by which the image analysis is performed, for example thresholds of brightness or size; [0051] Fig. 5, thresholds of brightness at pixel locations that are grouped together can be obtained from a histogram of the pixel values in the digital image).

Regarding dependent claim 6, which depends on claim 4, Brieu teaches: wherein the estimating the number of tissue sections corresponding to the serial section based on the local point includes estimating the number of tissue sections corresponding to the serial section by performing voting for the local point with the size of each of the segments as a weight (Brieu − [0012] the spatial relationship between the first pixel and the multiple first landmarks is defined by assigning a larger weighting to those first landmarks that are closer to the first pixel).

Regarding dependent claim 7, which depends on claim 1, Brieu teaches: wherein the estimating the number of tissue sections and the distance between the segments includes: performing geometric transform for the segments; and comparing difference values between segments to which the geometric transform is applied and regions matched by the geometric transform of the segments (Brieu − [0097] in step 103, pairs of objects from among all of the images are compared based on the similarity of geometric characteristics of the paired objects; each possible pair of objects from among all of the images is characterized by a similarity measure, which is the inverse of the sum of the absolute differences of the geometric characteristics of the two objects; for example, in a scenario of four digital images each containing five image objects, there are one hundred fifty possible pairs of image objects to compare).

Brieu does not explicitly teach estimating the distance between the segments. However, Mouton teaches: and estimating the distance between the segments based on a result of the comparison (Mouton − [0043-0045]: [0045] the focus distance of the (i,j)-th and (i0,j0)-th grid squares can be defined by the Euclidean distance of their corresponding normalized focus vectors as shown in Equation 2 of FIG. 8; Equation 3 shows the measure of the closeness of the (i,j)-th and (i0,j0)-th grid squares; finally, the likelihood of the (i,j)-th and (i0,j0)-th grid squares belonging to the same cell can be estimated by Equation 4; [0046] using the likelihood measure, L, defined above for two subimages belonging to the same cell, the likelihood of a subimage belonging to the cytoplasm of a particular cell is estimated by considering the fact that its nucleus is part of the cell — a Euclidean distance equation to identify cells that belong to each other on different grid squares).

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Brieu, Mouton, and Chukka, as each invention relates to utilizing digital image processing on tissue samples to determine areas of interest regarding tissue cells, thereby providing an automatic neural network for classification of cells with similar characteristics.

Regarding dependent claim 8, which depends on claim 7, Brieu teaches: wherein the estimating the distance between the segments based on the result of the comparison includes estimating the distance between the segments based on a degree at which difference values between the segments to which the geometric transform is applied and the regions matched by the geometric transform of the segments correspond to each other (Brieu − [0097] in step 103, pairs of objects from among all of the images are compared based on the similarity of geometric characteristics of the paired objects; each possible pair of objects is characterized by a similarity measure, which is the inverse of the sum of the absolute differences of the geometric characteristics of the two objects).

Regarding dependent claim 9, which depends on claim 1, Brieu teaches: wherein the identifying the tissue sections corresponding to the serial section includes: generating a graph based on the distance between the segments; and identifying the tissue sections corresponding to the serial section by splitting the graph based on the estimated number of tissue sections (Brieu − [0102-0103] the spanning tree is a connected and undirected graph for which all of the vertices (image objects) lie in the tree and which contains no closed loops; for example, a spanning tree between objects in four slices can include the following pairs: object 1 in slice 1 paired with object 2 in slice 2; object 2 in slice 2 paired with object 3 in slice 3; and object 3 in slice 3 paired with object 4 in slice 4).

Regarding dependent claim 10, which depends on claim 9, Brieu teaches: wherein the graph includes: a node with sizes of the segments as a weight; and an edge with the distance between the segments as a weight (Brieu − [0012] [0102-0103] the spatial relationship between the first pixel and the multiple first landmarks is defined by assigning a larger weighting to those first landmarks that are closer to the first pixel; the spanning tree contains nodes in the graph; [0103] an edge is defined between two image objects at the vertices of the spanning tree if the two objects are directly paired; a weight is assigned to each edge of the tree as the inverse of the comparison factor between the image objects linked by the edge; in one implementation, the weight of an edge between two image objects is the inverse of the overlap ratio between the two image objects).

Regarding dependent claim 11, which depends on claim 9, Brieu teaches: wherein the identifying the tissue sections corresponding to the serial section by splitting the graph based on the estimated number of tissue sections includes: splitting the graph to suit the estimated number of tissue sections according to the distance between the segments; grouping the segments based on the graph split to suit the estimated number of tissue sections; and identifying each segment group generated through the grouping as one tissue section (Brieu − [0012] [0102-0103], cited above for claim 10).

Regarding independent claim 12, which is directed to a non-transitory computer-readable storage medium: claim 12 has similar/same technical features and limitations as claim 1 and is rejected under the same rationale.

Regarding independent claim 13, which is directed to a computing device: claim 13 has similar/same technical features and limitations as claim 1 and is rejected under the same rationale.
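Claims 9-11 describe building a graph over segments with inter-segment distances as edge weights and splitting it to match the estimated number of tissue sections. One standard way to realize that step, sketched here as an assumption rather than as the applicant's or Brieu's actual algorithm, is single-linkage grouping: merge segments in order of increasing distance until exactly k components remain, which is equivalent to cutting the k−1 longest edges of a minimum spanning tree.

```python
def group_segments(n, edges, k):
    """Split n segments into k groups. `edges` is a list of
    (distance, seg_a, seg_b) tuples; a smaller distance means the two
    segments are more likely to belong to the same tissue section."""
    parent = list(range(n))  # union-find forest over segment indices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = n
    for _dist, a, b in sorted(edges):  # Kruskal order: closest first
        ra, rb = find(a), find(b)
        if ra != rb and components > k:
            parent[ra] = rb  # merge the two closest groups
            components -= 1

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Two well-separated pairs of segments split cleanly into k=2 sections.
print(group_segments(4, [(1.0, 0, 1), (1.2, 2, 3),
                         (5.0, 1, 2), (6.0, 0, 3)], k=2))
# [[0, 1], [2, 3]]
```

With the estimated section count supplied as k, each resulting component corresponds to one identified tissue section.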
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARL E BARNES JR, whose telephone number is (571) 270-3395. The examiner can normally be reached Monday-Friday, 9am-6pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CARL E BARNES JR/
Examiner, Art Unit 2178

/STEPHEN S HONG/
Supervisory Patent Examiner, Art Unit 2178

Prosecution Timeline

Jan 27, 2022
Application Filed
May 02, 2024
Non-Final Rejection — §103
Aug 08, 2024
Response Filed
Nov 16, 2024
Final Rejection — §103
Feb 24, 2025
Request for Continued Examination
Feb 26, 2025
Response after Non-Final Action
Jun 10, 2025
Non-Final Rejection — §103
Sep 16, 2025
Response Filed
Sep 30, 2025
Final Rejection — §103
Jan 02, 2026
Request for Continued Examination
Jan 08, 2026
Response after Non-Final Action
Jan 09, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12584932
SLIDE IMAGING APPARATUS AND A METHOD FOR IMAGING A SLIDE
2y 5m to grant Granted Mar 24, 2026
Patent 12541640
COMPUTING DEVICE FOR MULTIPLE CELL LINKING
2y 5m to grant Granted Feb 03, 2026
Patent 12536464
SYSTEM FOR CONSTRUCTING EFFECTIVE MACHINE-LEARNING PIPELINES WITH OPTIMIZED OUTCOMES
2y 5m to grant Granted Jan 27, 2026
Patent 12530765
SYSTEMS AND METHODS FOR CALCIUM-FREE COMPUTED TOMOGRAPHY ANGIOGRAPHY
2y 5m to grant Granted Jan 20, 2026
Patent 12530523
METHOD, APPARATUS, SYSTEM, AND COMPUTER PROGRAM FOR CORRECTING TABLE COORDINATE INFORMATION
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 32%
With Interview: 57% (+25.2%)
Median Time to Grant: 4y 4m
PTA Risk: High
Based on 202 resolved cases by this examiner. Grant probability derived from career allow rate.
