Prosecution Insights
Last updated: April 19, 2026
Application No. 18/484,273

TOUCHLESS VOLUME WAVEFORM SAMPLING TO DETERMINE RESPIRATION RATE

Non-Final OA: §103, §112
Filed: Oct 10, 2023
Examiner: FUJITA, KATRINA R
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Covidien LP
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 0m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 70% (472 granted / 674 resolved; +8.0% vs TC avg, above average)
Interview Lift: +24.0% (allow rate for resolved cases with an interview vs. without)
Typical Timeline: 3y 0m avg prosecution; 25 currently pending
Career History: 699 total applications across all art units

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 55.7% (+15.7% vs TC avg)
§102: 15.3% (-24.7% vs TC avg)
§112: 11.8% (-28.2% vs TC avg)

TC averages are estimates. Based on career data from 674 resolved cases.
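The headline figures above hang together arithmetically; here is a quick sanity check (a sketch, assuming the reported deltas are simple percentage-point differences):

```python
# Quick arithmetic check of the headline examiner statistics on this page,
# assuming the reported lifts are percentage-point differences.
granted, resolved = 472, 674

career_allow_rate = granted / resolved          # 472/674, roughly 70.0%
implied_tc_average = career_allow_rate - 0.080  # "+8.0% vs TC avg"
with_interview = career_allow_rate + 0.240      # "+24.0% interview lift"

print(f"career allow rate:  {career_allow_rate:.1%}")
print(f"implied TC average: {implied_tc_average:.1%}")
print(f"with interview:     {with_interview:.1%}")
```

The 94% "with interview" figure is exactly the 70% base rate plus the +24.0% lift, so the page appears to treat the lift additively.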

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Notes

Claim 6 recites a limitation of the form “at least one of A and B” with respect to “at least one of a visualization mask…and the bounding box”. In accordance with the decision of the U.S. Court of Appeals for the Federal Circuit in SuperGuide Corp v. DirecTV Enterprises, Inc., these limitations are conjunctive in nature and to be construed as “at least one of A and at least one of B”. Therefore, these claims are addressed herein as requiring each of these steps rather than the alternative of A or B.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 9 and 14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 9 recites a limitation of the form “the processing circuitry is configured to: A, B, C, D, E, F or G”. While the specification appears to disclose A-G as alternatives to each other and not done in conjunction with each other, the claim does not make clear that only one of them is performed. Rather, the claim language appears to state that A-E are performed along with either F or G. As such, this is unclear. 
For purposes of examination, the Examiner will assume that one of A-G is performed. Further clarification is required. Similar reasoning applies to claim 14.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 5, 7, 10 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Addison et al. (US 2019/0209046) and Zhou et al. (US 2019/0130191).

Regarding claim 1, Addison et al. discloses a system for non-contact monitoring respiration of a patient comprising: at least one camera sensor configured to capture image data associated with a patient (“In some embodiments, the image capture device 385 is a remote sensing device such as a video camera. 
In some embodiments, the image capture device 385 may be some other type of device, such as a proximity sensor or proximity sensor array, a heat or infrared sensor/camera, a sound/acoustic or radiowave emitter/detector, or any other device that may be used to monitor the location of a patient and an ROI of a patient to determine tidal volume” at paragraph 0090, line 1); a memory (“The computing device 300 includes a processor 315 that is coupled to a memory 305” at paragraph 0089, line 5); and processing circuitry coupled to the at least one camera sensor and the memory (“The processor 335 can store and recall data and applications in the memory 330” at paragraph 0091, line 2; “With this configuration, the processor 315, and subsequently the computing device 300, can communicate with other devices, such as the server 325 through a connection 370 and the image capture device 385 through a connection 380” at paragraph 0089, line 14), the processing circuitry being configured to: obtain the image data (“The camera 214 generates a sequence of images over time. The camera 214 may be a depth sensing camera, such as a Kinect camera from Microsoft Corp. (Redmond, Wash.). A depth sensing camera can detect a distance between the camera and objects in its field of view. Such information can be used, as disclosed herein, to determine that a patient is within the field of view of the camera 214 and determine a region of interest (ROI) to monitor on the patient” at paragraph 0083, line 1); determine a bounding box for the image data based on pixels within the image data associated with respiration (“Once an ROI is identified, that ROI can be monitored over time, and the change in depth of points within the ROI can represent movements of the patient associated with breathing. 
Accordingly, those movements, or changes of points within the ROI, can be used to determine tidal volume as disclosed herein” at paragraph 0083, second to last sentence; “The image includes a patient 390 and a region of interest (ROI) 395. The ROI 395 can be used to determine a volume measurement from the chest of the patient 390. The ROI 395 is located on the patient's chest. In this example, the ROI 395 is a square box” at paragraph 0095, line 1); determine a sampling region (“Accordingly, an ROI may be dynamically selected, so that an optimum sampling region based on depth data and skeleton coordinates is continually determined and refreshed as described below” at paragraph 0109, last sentence; the bounding box in this instance encompasses the entire sampling region); determine at least a portion of a volume waveform based at least in part on the sampling region (“The position of individual points within the ROI 395 may be integrated across the area of the ROI 395 to provide a change in volume over time as shown in FIGS. 4 and 5. FIG. 4 is a graph showing a tidal volume calculation over time according to various embodiments described herein” at paragraph 0095, second to last sentence); determine at least one respiration parameter based on the volume waveform (“The peaks and valleys of the signal in FIG. 4 can be used to identify individual breaths, the size of individual breaths, and a patient's overall respiration rate” at paragraph 0097, line 6); and output, for display, the at least one respiration parameter (“The processor 315 may also display objects, applications, data, etc. on an interface/display 310” at paragraph 0089, line 10; also in conjunction with the disclosure of paragraph 0140 and figure 4, it is implied that the respiration rate is able to be displayed as a relevant vital sign). Addison et al. 
does not explicitly disclose that the sampling region is based on an average including the bounding box and at least one previously determined bounding box. Zhou et al. teaches a system in the same field of endeavor of bounding box based object detection, comprising: processing circuitry coupled to the at least one camera sensor and the memory (“In some cases, the computing device or apparatus may include a processor, microprocessor, microcomputer, or other component of a device that is configured to carry out the steps of processes 1200-1900. In some examples, the computing device or apparatus may include a camera configured to capture video data (e.g., a video sequence) including video frames” at paragraph 0188, line 5), the processing circuitry being configured to: determine a sampling region, the sampling region being based on an average including the bounding box and at least one previously determined bounding box (“Here, width.sub.hist and height.sub.hist correspond to, respectively, the average width and the average height of the historical output bounding box across the set of previous frames stored in buffer queue 1102, whereas width.sub.c, and height correspond to, respectively, the width and the height of the candidate bounding box (included in input attributes 1010). The value of width.sub.hist can be determined by averaging the sum of widths of the historical output bounding box for each of the set of previous frames, with M representing the number of the set of previous frames (e.g., eight) for which the historical attributes are stored in buffer queue 1102. Further, the value of height.sub.hist can be determined by averaging the sum of heights of the historical output bounding box for each of the set of previous frames, with M representing the number of the set of previous frames (e.g., eight) for which the historical attributes are stored in buffer queue 1102” at paragraph 0159, line 1). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the bounding box averaging as taught by Zhou et al. to determine the ROI of Addison et al. “to reduce a rate of change these attributes of the output bounding region over a set of continuous frames” (Zhou et al. at paragraph 0012, second to last sentence) and “more accurate tracking of an object can be performed using the output bounding region” (Zhou et al. at paragraph 0012, last sentence). Regarding claim 5, Zhou et al. discloses a system wherein as part of determining the sampling region, the processing circuitry is configured to determine a running average based on the bounding box and a set number of immediately preceding previously determined bounding boxes (“Here, width.sub.hist and height.sub.hist correspond to, respectively, the average width and the average height of the historical output bounding box across the set of previous frames stored in buffer queue 1102, whereas width.sub.c, and height correspond to, respectively, the width and the height of the candidate bounding box (included in input attributes 1010). The value of width.sub.hist can be determined by averaging the sum of widths of the historical output bounding box for each of the set of previous frames, with M representing the number of the set of previous frames (e.g., eight) for which the historical attributes are stored in buffer queue 1102. Further, the value of height.sub.hist can be determined by averaging the sum of heights of the historical output bounding box for each of the set of previous frames, with M representing the number of the set of previous frames (e.g., eight) for which the historical attributes are stored in buffer queue 1102” at paragraph 0159, line 1). Regarding claim 7, the Addison et al. and Zhou et al. 
combination discloses a system wherein as part of determining the bounding box, the processing circuitry is configured to at least one of: execute a bounding box algorithm to determine the bounding box (“FIG. 6 is a flowchart of a method 600 for determining a region of interest (ROI) and measuring tidal volume according to various embodiments described herein. The method 600 includes receiving at least one image comprising at least part of a patient at 605. The method 600 further includes determining a skeleton or reference point of the patient at 610. The method 600 further includes determining a region of interest (ROI) based at least in part on the skeleton or reference point at 615. In some embodiments, methods or measurements other than a skeleton may be used to determine the ROI. For example, the system may identify points on the patient's body (such as shoulders, head, neck, waist, etc.) that correspond to specific places that can be used as a centroid, reference, or flood fill point for forming an ROI” Addison et al. at paragraph 0098, line 1; “Here, width.sub.hist and height.sub.hist correspond to, respectively, the average width and the average height of the historical output bounding box across the set of previous frames stored in buffer queue 1102, whereas width.sub.c, and height correspond to, respectively, the width and the height of the candidate bounding box (included in input attributes 1010). The value of width.sub.hist can be determined by averaging the sum of widths of the historical output bounding box for each of the set of previous frames, with M representing the number of the set of previous frames (e.g., eight) for which the historical attributes are stored in buffer queue 1102. 
Further, the value of height.sub.hist can be determined by averaging the sum of heights of the historical output bounding box for each of the set of previous frames, with M representing the number of the set of previous frames (e.g., eight) for which the historical attributes are stored in buffer queue 1102” Zhou et al. at paragraph 0159, line 1); determine a visualization mask, the visualization mask comprising the pixels within the image data associated with respiration, wherein the bounding box is a smallest rectangle that includes all of the visualization mask; or execute a semantic segmentation algorithm to determine the visualization mask, wherein the bounding box is the smallest rectangle that includes all of the visualization mask. Regarding claim 10, Addison et al. discloses a method for non-contact monitoring respiration of a patient, the method comprising: obtaining the image data from at least one camera sensor (“The camera 214 generates a sequence of images over time. The camera 214 may be a depth sensing camera, such as a Kinect camera from Microsoft Corp. (Redmond, Wash.). A depth sensing camera can detect a distance between the camera and objects in its field of view. Such information can be used, as disclosed herein, to determine that a patient is within the field of view of the camera 214 and determine a region of interest (ROI) to monitor on the patient” at paragraph 0083, line 1); determining a visualization mask based on pixels within the image data associated with respiration (“FIG. 10 is a diagram showing a patient with a superimposed skeleton and ROI according to various embodiments described herein. A two-dimensional body mask 1005 can also be inferred from the skeletal coordinates” at paragraph 0111, line 1); determining a bounding box for the image data (“Once an ROI is identified, that ROI can be monitored over time, and the change in depth of points within the ROI can represent movements of the patient associated with breathing. 
Accordingly, those movements, or changes of points within the ROI, can be used to determine tidal volume as disclosed herein” at paragraph 0083, second to last sentence; “The image includes a patient 390 and a region of interest (ROI) 395. The ROI 395 can be used to determine a volume measurement from the chest of the patient 390. The ROI 395 is located on the patient's chest. In this example, the ROI 395 is a square box” at paragraph 0095, line 1) based on the visualization mask (“A two-dimensional body mask 1005 can also be inferred from the skeletal coordinates and encompasses the breathing ROI” at paragraph 0111, line 3; though not explicit, the ROI is ensured to be contained within the body mask); determine a sampling region (“Accordingly, an ROI may be dynamically selected, so that an optimum sampling region based on depth data and skeleton coordinates is continually determined and refreshed as described below” at paragraph 0109, last sentence; the bounding box in this instance encompasses the entire sampling region); determine at least a portion of a volume waveform based at least in part on the sampling region (“The position of individual points within the ROI 395 may be integrated across the area of the ROI 395 to provide a change in volume over time as shown in FIGS. 4 and 5. FIG. 4 is a graph showing a tidal volume calculation over time according to various embodiments described herein” at paragraph 0095, second to last sentence); determine at least one respiration parameter based on the volume waveform (“The peaks and valleys of the signal in FIG. 4 can be used to identify individual breaths, the size of individual breaths, and a patient's overall respiration rate” at paragraph 0097, line 6); and output, for display, the at least one respiration parameter (“The processor 315 may also display objects, applications, data, etc. 
on an interface/display 310” at paragraph 0089, line 10; also in conjunction with the disclosure of paragraph 0140 and figure 4, it is implied that the respiration rate is able to be displayed as a relevant vital sign).

Addison et al. does not explicitly disclose that the sampling region is based on an average including the bounding box and at least one previously determined bounding box. Zhou et al. teaches a system in the same field of endeavor of bounding box based object detection, comprising: processing circuitry coupled to the at least one camera sensor and the memory (“In some cases, the computing device or apparatus may include a processor, microprocessor, microcomputer, or other component of a device that is configured to carry out the steps of processes 1200-1900. In some examples, the computing device or apparatus may include a camera configured to capture video data (e.g., a video sequence) including video frames” at paragraph 0188, line 5), the processing circuitry being configured to: determine a sampling region, the sampling region being based on an average including the bounding box and at least one previously determined bounding box (“Here, width.sub.hist and height.sub.hist correspond to, respectively, the average width and the average height of the historical output bounding box across the set of previous frames stored in buffer queue 1102, whereas width.sub.c, and height correspond to, respectively, the width and the height of the candidate bounding box (included in input attributes 1010). The value of width.sub.hist can be determined by averaging the sum of widths of the historical output bounding box for each of the set of previous frames, with M representing the number of the set of previous frames (e.g., eight) for which the historical attributes are stored in buffer queue 1102. 
Further, the value of height.sub.hist can be determined by averaging the sum of heights of the historical output bounding box for each of the set of previous frames, with M representing the number of the set of previous frames (e.g., eight) for which the historical attributes are stored in buffer queue 1102” at paragraph 0159, line 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the bounding box averaging as taught by Zhou et al. to determine the ROI of Addison et al. “to reduce a rate of change these attributes of the output bounding region over a set of continuous frames” (Zhou et al. at paragraph 0012, second to last sentence) and “more accurate tracking of an object can be performed using the output bounding region” (Zhou et al. at paragraph 0012, last sentence). Regarding claim 12, Zhou et al. discloses a method wherein determining the sampling region comprises determining a running average based on the bounding box and a set number of immediately preceding previously determined bounding boxes (“Here, width.sub.hist and height.sub.hist correspond to, respectively, the average width and the average height of the historical output bounding box across the set of previous frames stored in buffer queue 1102, whereas width.sub.c, and height correspond to, respectively, the width and the height of the candidate bounding box (included in input attributes 1010). The value of width.sub.hist can be determined by averaging the sum of widths of the historical output bounding box for each of the set of previous frames, with M representing the number of the set of previous frames (e.g., eight) for which the historical attributes are stored in buffer queue 1102. 
Further, the value of height.sub.hist can be determined by averaging the sum of heights of the historical output bounding box for each of the set of previous frames, with M representing the number of the set of previous frames (e.g., eight) for which the historical attributes are stored in buffer queue 1102” at paragraph 0159, line 1).

Claim(s) 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Addison et al. and Zhou et al. as applied to claim 1 above, and further in view of Mao et al. (US 2021/0365707).

Regarding claim 2, the Addison et al. and Zhou et al. combination discloses a system wherein the processing circuitry is further configured to: determine a region of interest (ROI) of the image data, the ROI comprising a portion of a field of view of the at least one camera sensor (“In some embodiments, the system determines a skeleton outline of a patient to identify a point or points from which to extrapolate an ROI.” Addison et al. at paragraph 0084, line 1). While the Addison et al. and Zhou et al. combination teaches changing the definition of the bounding box (see paragraphs 0095-0118 of Addison et al. for example), the combination does not explicitly disclose determining that the bounding box extends beyond the ROI and based on the bounding box extending beyond the ROI, setting the bounding box to equal the ROI. Mao et al. teaches a system in the same field of endeavor of bounding box based object detection and tracking, wherein the bounding box is adjusted to accommodate for the size of the ROI (“As noted above, the ROI can be represented by a bounding box or other type of bounding region. In some cases, the ROI determination engine 804 can generate a bounding box for the ROI that fits to the boundaries of the object in the ROI.” at paragraph 0183, line 4). 
As such, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to cross check the ROI size in generation of the bounding box as taught by Mao et al. in the system of the Addison et al. and Zhou et al. combination to ensure that the bounding box does not include data outside of the area of interest, which would increase subsequent processing time and likelihood of tracking errors. Regarding claim 3, the Addison et al. and Zhou et al. combination discloses a system wherein the processing circuitry is further configured to: determine a region of interest (ROI) of the image data, the ROI comprising a portion of a field of view of the at least one camera sensor (“In some embodiments, the system determines a skeleton outline of a patient to identify a point or points from which to extrapolate an ROI.” Addison et al. at paragraph 0084, line 1). While the Addison et al. and Zhou et al. combination teaches changing the definition of the bounding box (see paragraphs 0095-0118 of Addison et al. for example), the combination does not explicitly disclose determining that the bounding box is less than a threshold size; and based on the bounding box being less than the threshold size, setting the bounding box to be equal to the ROI. Mao et al. teaches a system in the same field of endeavor of bounding box based object detection and tracking, wherein the bounding box is adjusted to accommodate for the size of the ROI (“As noted above, the ROI can be represented by a bounding box or other type of bounding region. In some cases, the ROI determination engine 804 can generate a bounding box for the ROI that fits to the boundaries of the object in the ROI.” at paragraph 0183, line 4). As such, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to cross check the ROI size in generation of the bounding box as taught by Mao et al. in the system of the Addison et al. 
and Zhou et al. combination to ensure that the bounding box does not exclude critical data inside of the area of interest.

Claim(s) 8, 9, 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Addison et al. and Zhou et al. as applied to claim 1 above, and further in view of Yoo et al. (US 2020/0294257).

Regarding claims 8 and 13, the Addison et al. and Zhou et al. combination discloses a system and method as described in claims 1 and 10 above. The Addison et al. and Zhou et al. combination does not explicitly disclose determining a plurality of bounding boxes, each of the bounding boxes determined using a different method and determining the bounding box based on the plurality of bounding boxes. Yoo et al. teaches a system and method wherein the processing circuitry is configured to: determine a plurality of bounding boxes (figure 1, numerals 116A and 116B), each of the bounding boxes determined using a different method (“For example, the ROI determiner 104 may leverage a low-resolution (LR) region proposal network (RPN) 112 (collectively referred to as an LR-RPN 112) to generate one or more ROIs 116A. The LR-RPN 112 may include a DNN trained to predict—from 2D data, such as image data—locations of the ROI(s) 116A” at paragraph 0026, line 7; “As another example, the ROI determiner 104 may leverage a depth-based region proposal network (RPN) 114 (collectively referred to as a D-RPN 114) to generate one or more ROIs 116B. The D-RPN 114 may include a DNN trained to predict—from 2D and/or 3D data, such as LIDAR data, RADAR data, and/or depth data from other depth sensor types—locations of the ROI(s) 116B” at paragraph 0030, line 1); and determine the bounding box based on the plurality of bounding boxes (“Where at least one ROI 116A and at least one ROI 116B overlap—e.g., beyond a certain threshold amount—filtering, weighting, and/or other post-processing may be performed to determine the final ROIs 118. 
For example, NMS may be used to determine the final ROIs 118 from the combined ROIs 116A and 116B, weighting may be used (e.g., to give more weight to the ROIs 116A than the ROIs 116B, or vice versa), and/or other methods may be performed, such as averaging” at paragraph 0036, line 8). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize multi-RPNs as taught by Yoo et al. to determine the initial bounding boxes of the Addison et al. and Zhou et al. combination, since “By leveraging both 2D and 3D data, false positives and false negatives may be minimized” (Yoo et al. at paragraph 0006, last sentence).

Regarding claims 9 and 14, the Addison et al., Zhou et al. and Yoo et al. combination discloses a system and a method wherein as part of determining the bounding box based on the plurality of bounding boxes, the processing circuitry is configured to: set the bounding box to be equal to an average extent of each of the plurality of bounding boxes (“Here, width.sub.hist and height.sub.hist correspond to, respectively, the average width and the average height of the historical output bounding box across the set of previous frames stored in buffer queue 1102, whereas width.sub.c, and height correspond to, respectively, the width and the height of the candidate bounding box (included in input attributes 1010). The value of width.sub.hist can be determined by averaging the sum of widths of the historical output bounding box for each of the set of previous frames, with M representing the number of the set of previous frames (e.g., eight) for which the historical attributes are stored in buffer queue 1102. Further, the value of height.sub.hist can be determined by averaging the sum of heights of the historical output bounding box for each of the set of previous frames, with M representing the number of the set of previous frames (e.g., eight) for which the historical attributes are stored in buffer queue 1102” Zhou et al. 
at paragraph 0159, line 1); set the bounding box to be equal to a majority voted bounding box of the plurality of bounding boxes; set the bounding box to be equal to a majority voted bounding box of the plurality of bounding boxes that is further adjusted to be a largest or smallest bounding box of the majority voted bounding box; set the bounding box to be the largest bounding box of the plurality of bounding boxes; set the bounding box to be a region of overlap of all of the bounding boxes; set the bounding box to be a superset of the plurality of bounding boxes; or employ non-maximum-suppression to determine the bounding box.

Allowable Subject Matter

Claims 4, 6 and 11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the prior art does not disclose that the processing circuitry is configured to determine the average based on the equation box_sampling = q * mask_bbox + (q−1) * box_previous_sampling, where box_sampling is the sampling region, mask_bbox is the bounding box, q < 1, and box_previous_sampling is a previously determined sampling region, as required by claims 4 and 11; determining that a location of pixels included in at least one of a visualization mask, the visualization mask comprising the pixels within the image data associated with respiration, and the bounding box have moved more than a set distance from a previously obtained image data to the image data in less than a set time period, and based on the location of pixels included in the at least one of the visualization mask and the bounding box moving more than a set distance from the previously obtained image data to the image data in less than a set time period, set the bounding box to equal the ROI, as required by claim 6. 
As previously explained above, the conjunctive nature of claim 6 requires both the pixels of the visualization mask and the bounding box to satisfy the distance requirement.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATRINA R FUJITA whose telephone number is (571) 270-1574. The examiner can normally be reached Monday - Friday, 9:30 am - 5:30 pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KATRINA R FUJITA/
Primary Examiner, Art Unit 2672
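The two averaging schemes at the center of this rejection and the allowable subject matter can be sketched in a few lines. This is an illustrative reconstruction, not code from Addison, Zhou, or the application; the function names and the q value are hypothetical, and the claim 4/11 formula is implemented as a convex blend (1 − q rather than the transcribed q − 1, on the assumption that q < 1 implies a weighted average):

```python
# Illustrative sketch of the two averaging schemes discussed in the Office
# Action above. NOT code from Addison, Zhou, or the application; the
# function names and the q value are hypothetical.
from collections import deque

M = 8  # Zhou's example buffer holds the M most recent output bounding boxes

def historical_average(boxes):
    """Average width/height across stored boxes, as in Zhou's
    width_hist / height_hist computation over the buffer queue."""
    w = sum(b[0] for b in boxes) / len(boxes)
    h = sum(b[1] for b in boxes) / len(boxes)
    return w, h

def blend_sampling_region(mask_bbox, prev_sampling, q=0.3):
    """Blend the current mask bounding box with the previous sampling region.
    Claims 4/11 as transcribed recite q*mask_bbox + (q-1)*prev; with q < 1
    this is presumably the convex blend q*mask_bbox + (1-q)*prev, which is
    what is implemented here (an assumption, not the claim text)."""
    return tuple(q * m + (1 - q) * p for m, p in zip(mask_bbox, prev_sampling))

buffer = deque(maxlen=M)                       # drops the oldest box past M
for box in [(100, 60), (104, 62), (98, 58)]:   # (width, height) per frame
    buffer.append(box)

print(historical_average(buffer))              # smoothed width/height
print(blend_sampling_region((100, 60), (96, 58)))
```

The distinction the examiner draws is visible here: the Zhou-style buffer averages a fixed window of previous boxes, while the claim 4/11 equation is a recursive blend against the previously determined sampling region, which is why only the latter was indicated as allowable.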

Prosecution Timeline

Oct 10, 2023 — Application Filed
Nov 06, 2025 — Non-Final Rejection, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597250: DETECTION OF PLANT DETRIMENTS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12582476: SYSTEMS FOR PLANNING AND PERFORMING BIOPSY PROCEDURES AND ASSOCIATED METHODS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585698: MULTIMEDIA FOCALIZATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586190: SYSTEM AND METHOD OF CLASSIFICATION OF BIOLOGICAL PARTICLES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12566341: PREDICTING SIZING AND/OR FITTING OF HEAD MOUNTED WEARABLE DEVICE (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70% (94% with interview, +24.0%)
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 674 resolved cases by this examiner. Grant probability derived from career allow rate.
