Prosecution Insights
Last updated: April 19, 2026
Application No. 18/689,373

SYSTEMS AND PROCESSES FOR DETECTION, SEGMENTATION, AND CLASSIFICATION OF POULTRY CARCASS PARTS AND DEFECTS

Non-Final OA — §102, §103
Filed
Mar 05, 2024
Examiner
CODRINGTON, SHANE WRENSFORD
Art Unit
2667
Tech Center
2600 — Communications
Assignee
BOARD OF TRUSTEES OF THE UNIVERSITY OF ARKANSAS
OA Round
1 (Non-Final)
Grant Probability: 100% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 0%

Examiner Intelligence

Grants 100% — above average
Career Allow Rate: 100% (1 granted / 1 resolved; +38.0% vs TC avg)
Interview Lift: -100.0% (minimal lift; based on resolved cases with interview)
Typical timeline: 2y 9m avg prosecution; 14 currently pending
Career history: 15 total applications, across all art units

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 60.5% (+20.5% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 7.9% (-32.1% vs TC avg)
Based on career data from 1 resolved case.

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 10/08/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The disclosure is objected to because of the following informality: spelling error, page 4, line 14, which reads "Showm". Appropriate correction is required.

Claim Objections

Claims 2 and 3 are objected to because of the following informality: the same spelling error, "ofi", appears in both claims. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 2 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Chao et al. (US 8126213 B2, hereinafter "Chao").
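For orientation, the workflow the rejection maps to claim 1 (acquire frames, identify carcasses, score potential defects, route to rework/discard) can be sketched in a few lines. This is an illustrative editorial sketch only: the function names, the pixel-intensity scoring, and the 0.5 threshold are assumptions, not taken from Chao or the application.

```python
# Minimal sketch of the claim-1 workflow: acquire frames ->
# identify carcasses -> detect defects -> route. All names and the
# 0.5 score threshold are illustrative assumptions.

def identify_carcasses(frame):
    # Placeholder detector: treats each region in the frame as one carcass.
    return list(enumerate(frame))

def defect_score(region):
    # Placeholder scorer: fraction of "abnormal" (bright) pixels in the region.
    abnormal = sum(1 for px in region if px > 200)
    return abnormal / max(len(region), 1)

def route(frames, threshold=0.5):
    """Return routing decisions: 'rework' if a defect is detected, else 'continue'."""
    decisions = []
    for frame in frames:
        for carcass_id, region in identify_carcasses(frame):
            label = "rework" if defect_score(region) >= threshold else "continue"
            decisions.append((carcass_id, label))
    return decisions

# Example: one frame with two carcass regions (lists of pixel intensities).
frame = [[90, 110, 120], [240, 250, 100]]
print(route([frame]))  # second region is mostly bright (>200) -> 'rework'
```

The routing step mirrors Chao's stated object of "removing or diverting unwholesome birds"; only the decision rule here is invented for illustration.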
As per claim 1, Chao teaches: acquiring one or more sets of frames or images from the video source of a plurality of broiler chicken carcasses after scalding, picking, and removal of head and feet in a processing plant (Fig. 3, label 32; Fig. 1C); automatically identifying one or more of the carcasses in the frames or images (Column 3, line 66: "process by which individual chicken carcasses can be detected"); detecting a potential defect or visual abnormality of one or more of the identified carcasses from the images (Column 3, line 1: "…object of the present invention is to provide a real-time automated inspection system…accurately identify wholesome and unwholesome chicken carcasses."); and, if a potential defect is detected in step iii, routing the identified carcass to a reworking or discard operation (Column 3, line 26: "A further object of the present invention is to provide an improved inspection process for detecting and for removing or diverting unwholesome birds from chicken processing lines.").

As per claim 2, Chao teaches acquiring one or more sets of frames or images of the plurality of broiler chicken carcasses against a dark color background (Column 6, line 11: "The hyperspectral/multispectral imaging system requires a means of providing a dark, non-reflective imaging background 3 such as, for example, a black non-reflective matte-surface acrylic or fabric panel.").

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over Chao et al. (US 8126213 B2, hereinafter "Chao") in view of Li et al. ("Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers", hereinafter "Li").

Chao teaches all the claim limitations presented in claim 2; see claim 2's 102 rejection. Chao further teaches cropping ("The hyperspectral images were analyzed to optimize the ROI size and location") in a chicken processing line environment. The region of interest section (Columns 15 and 16) describes related processing steps used to define/locate relevant image regions. Chao does not explicitly segment or insert a bounding box around regions of interest.

Li teaches segmenting, cropping, or both the images to a region of interest (Section 3.3.3, Mask decoder: "the mask decoder is proposed to predict the categories and masks according to the given queries." Li further explains that the masks created correspond to a "meaningful region": "the ground truth mask is exactly the meaningful region…", i.e., an ROI) and inserting a bounding box around each region of interest ("Location decoder refines Nth thing queries by detecting the bounding boxes of things…"; Figure 2, label (c) also showcases this).

Accordingly, a person of ordinary skill in the art would have been motivated to modify Chao's workflow by representing Chao's ROI as bounding-box-defined regions when implementing ROI-based inspection with Li's panoptic pipeline, because Li's architecture explicitly uses bounding boxes to localize "things" and supports downstream region/mask processing over those localized regions. This modification yields a deterministic, machine-usable ROI box that stabilizes subsequent region processing, constrains analysis to the localized carcass region, and reduces background interference in Chao's high-speed inspection environment.

As per claim 4, Chao and Li cover all the claim limitations presented in claim 3; see claim 3's 103 rejection. Chao teaches detecting and analyzing the shape of the carcass and the region of interest for designation of "carcass issues" (Column 16, lines 54-60: "decision output values for Region of Interest pixels in individual line-scan images… average decision output values to identify chickens as being wholesome or unwholesome…"). Chao's imaging strategy elucidates the carcass shape with respect to the region of interest (Fig. 8 and Column 10, line 66: "FIG. 8 shows a contour image of two examples of chicken carcasses with the SP and EP marked and connected by a line on each. The possible size and location of the ROI is described by parameters m and n, which extended below the SP-EP line…"). Chao does not explicitly use a deep learning neural network for segmenting, detecting, or analyzing the region of interest.

Li teaches detecting, analyzing, and segmenting shape using a deep learning neural network (specifically, Li teaches a deep learning segmentation stack and mask-decoder pipeline performing detection, classification, and segmentation: Figure 2,
"Panoptic SegFormer is composed of backbone, encoder and decoder"; Section 3.1, Overall architecture: "our architecture feeds an input image…to the backbone network and obtains the feature maps…"; and Section 3.3.3: "the mask decoder is proposed to predict the categories". Panoptic SegFormer uses DETR as a …)

Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have been motivated to further the Chao/Li pipeline by implementing Li's mask-prediction segmentation pipeline within the ROI bounding box region. Li's decoder explicitly outputs masks corresponding to meaningful regions, allowing direct carcass shape delineation at the pixel level instead of relying solely on intensity thresholds or handcrafted region extraction. This improves the wholeness and stability of the modified workflow with respect to pose and/or orientation, lighting, and occlusion on a production line. It also improves defect localization within the ROI, because decisions can be computed from the segmented carcass region rather than from mixed foreground/background pixels.

As per claim 5, Chao and Li cover all the claim limitations presented in claim 4; see claim 4's 103 rejection. Chao does not disclose: i. creating a set of low-resolution feature maps from the region of interest using a convolutional neural network; ii. creating a feature pyramid network of the feature maps with varying resolutions; iii. creating a concatenated feature of feature maps from the feature pyramid network; and iv. creating scaled feature maps with identical sizes of the frames or images.

Li teaches creating a set of low-resolution feature maps from the region of interest using a convolutional neural network (Li explicitly teaches feature maps at reduced resolutions from a CNN backbone, citing ResNet C5 as an example backbone feature stage: Methods section, "Our architecture feeds an input image X ∈ R H×W×3 to the backbone network, and obtains the feature maps C3, C4, and C5 from the last three stages, of which the resolutions are 1/8, 1/16 and 1/32 compared to the input image, respectively", and Section 3.2, Transformer Encoder: "…can only process low resolution feature maps (e.g., ResNet C5)…"); creating a feature pyramid network of the feature maps with varying resolutions (Figure 2, label (a), and the Methods section: "backbone network…obtains the feature maps C3, C4, and C5 from the last three stages, of which the resolutions are 1/8, 1/16 and 1/32 compared to the input image, respectively"); creating a concatenated feature of feature maps from the feature pyramid network (Li shows concatenation across multi-scale tokens/maps: Figure 2, label (a), with the corresponding Section 3.1, Overall architecture: "…using the concatenated feature tokens as input, the transformer encoder outputs the refined features of…"; Li's multi-scale mask path in the Mask Decoder section, "upsample…and concatenate them along the channel dimension…Concat(.) is the concatenation operation", corresponds to Figure B.3, "FPN style CNN"); and creating scaled feature maps with identical sizes of the frames or images (Li upsamples multi-scale maps to a common resolution, concluding in a fused map at one scale: Section 3.3.3, Mask Decoder: "we upsample these attention maps to the resolution of H/8×W/8 and concatenate them along the channel dimension", which corresponds to Figure 2, label (d)).

Accordingly, it would have been obvious to a person of ordinary skill in the art at the time this invention was effectively filed to further enhance the Chao/Li pipeline to include Li's multi-scale feature hierarchy (C3/C4/C5), encoder token concatenation, and common-resolution upsample-and-concatenation fusion within Chao's poultry inspection ROI.
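To illustrate the multi-scale fusion the cited Li passages describe, the shape arithmetic can be sketched as follows. This is an editorial sketch under stated assumptions: random arrays and nearest-neighbor upsampling stand in for a real CNN backbone; only the resolutions (1/8, 1/16, 1/32, fused at H/8×W/8) track the cited description.

```python
# Sketch of the cited multi-scale fusion: feature maps at 1/8, 1/16,
# and 1/32 of the input resolution are upsampled to a common
# H/8 x W/8 grid and concatenated along the channel axis.
import numpy as np

def upsample_nearest(x, factor):
    # x: (H, W, C) -> (H*factor, W*factor, C) by pixel repetition.
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

H, W = 64, 64
rng = np.random.default_rng(0)
c3 = rng.standard_normal((H // 8,  W // 8,  4))   # 1/8 resolution
c4 = rng.standard_normal((H // 16, W // 16, 4))   # 1/16 resolution
c5 = rng.standard_normal((H // 32, W // 32, 4))   # 1/32 resolution

fused = np.concatenate(
    [c3, upsample_nearest(c4, 2), upsample_nearest(c5, 4)], axis=-1
)
print(fused.shape)  # (8, 8, 12): one H/8 x W/8 map with stacked channels
```

The fused map keeps fine detail from the 1/8-scale features (small defects) alongside coarse context from the 1/32-scale features, which is the point the rationale below makes.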
Li provides a solid foundation and mechanism to preserve high-resolution features (small defects) while retaining low-resolution features (global context), and then to align them to a common resolution for a stable mask. This pipeline improves detection of both small defects and large anomalies without changing Chao's production line, fusing complementary scales into a single aligned representation used for segmentation and classification.

Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Chao et al. (US 8126213 B2, hereinafter "Chao") in view of Jain et al. (US 12165360 B1, hereinafter "Jain").

As per claim 6, Chao teaches a system for detecting defects for broiler chicken carcasses, the system comprising: one or more video sources (Figs. 1A and 1B); identifying, using the processor, one or more of the carcasses in the frames or images (Column 16, line 48 to Column 17, line 26: "…to identify the presence and entrance of the leading edge of a chicken into the linear field of view… line-scan images to identify the exit of the trailing edge of a chicken from the linear field of view…The above method wherein real-time analysis of individual line-scan images to identify the presence…performed by using the following algorithm"); detecting, using the processor, a potential defect or visual abnormality of one or more of the identified carcasses from the images (Column 6, line 13: "development of algorithms 20 …result of such computer analysis is the generation of a qualitative analysis such as a "wholesome/unwholesome" determination for each chicken that passes in front of the means for obtaining spectral images.
The developed algorithms 20 are implemented using commercial software 18 such as MATLAB…"); and routing the identified carcass to a reworking or discard operation if a potential defect is detected (Column 3, line 26: "A further object of the present invention is to provide an improved inspection process for detecting and for removing or diverting unwholesome birds from chicken processing lines.").

Chao does not teach: a wireless interface; a data store; a processor communicatively coupled to the one or more video sources, the wireless interface, and the data store; and memory storing instructions that, when executed, cause the processor to store, in the data store, one or more sets of frames or images from the video source of the broiler chickens on a processing line in a poultry processing plant.

Jain teaches a wireless interface (Figure 1, label 130; Column 11, line 33: "The network 130 may include any wired network, wireless network, or combination thereof."); a data store (Fig. 1, label 150; Column 7, line 19: "device may include on-device memory for storing images and analyses"); a processor communicatively coupled to the one or more video sources, the wireless interface, and the data store (in Figures 1-4, Jain shows the flow of processor-executed image processing and communications via a communications interface to other devices/servers); and memory storing instructions that, when executed, cause the processor to store, in the data store, one or more sets of frames or images from the video source (Column 5, line 28: "Computer readable storage medium having program instructions…executable by one or more processors", and Column 7, line 20, stating that the device has "on device memory for storing images").

Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have found it obvious to extend Chao's pipeline to include Jain's concept of a system having a wireless communication interface and server connectivity for offloading image/analysis data and supporting outputs, and thereby arrive at the limitations of claim 6. A person of ordinary skill in the art would have been motivated to make this modification because it yields remote and centralized handling of inspection results while still performing Chao's poultry line inspection and unwholesome-chicken diversion function. The modification improves availability and visibility of inspection evidence without changing the underlying poultry inspection principle.

As per claim 7, Chao and Jain teach all claim limitations previously rejected in claim 6's 103 rejection; please see claim 6's 103 rejection. Jain further teaches detected defects hosted on a cloud-based server (Column 3, line 62: "devices may be configured to automatically connect to a remote management server (e.g., a "cloud"-based management server), and may offload images and analyses to the remote management server via wired or wireless communications"; Jain's "reporting and defect (and/or other characteristic) alerting" is naturally routed through this cloud-based server) and defects provided through sending an email (Column 38, line 48: "…alert data, sent by the machine vision device, may be…configured for delivery within an email."), website login, or a link directed to the detected defects.
(Column 44, line 54: "functionality may be accessible by a user through a web-based viewer (such as a web browser", and Column 20, line 9: "human machine interface device 170 is directed to connect with the machine vision device 150 (e.g., via an IP address and, optionally, a particular port of the machine vision device 150, or a unique identifier").

Accordingly, it would have been obvious to a person of ordinary skill in the art to have further modified the previously discussed Chao/Jain workflow to incorporate Jain's concept so that detected defect results are hosted on a cloud-based server and delivered via email. Jain describes analytics with notifications that provide a practical operational benefit in production-line inspections, such as remote visibility and timely alerting, to support quality control and response times. Jain gives Chao secure, remote, and rapid dissemination of defect evidence and results through authenticated web access or direct links. Ultimately, a person of ordinary skill in the art would have seen that centralizing detected defects in a cloud-capable system and distributing alerts via email enhances responsiveness, oversight, and traceability, which in turn increases quality and productivity.

Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Chao et al. (US 8126213 B2, hereinafter "Chao") in view of Jain et al. (US 12165360 B1, hereinafter "Jain"), in further view of Li et al. ("Panoptic SegFormer: Delving Deeper into Panoptic Segmentation with Transformers", hereinafter "Li").

Chao and Jain cover all claim limitations previously presented in claim 6; see claim 6's 103 rejection. Neither Chao nor Jain teaches using a deep learning neural network to detect, analyze, and segment the shape of the carcass in a region of interest in a bounding box inserted on the images.

Li teaches a deep learning neural network (Introduction: "we propose Panoptic SegFormer, a concise and effective framework for panoptic segmentation with transformers.") to detect (Section 3.1, Overall Architecture: "by detecting the bounding boxes of things to capture location information"), analyze (Section 3.1, Overall architecture: "The mask decoder …predicts mask and category at each layer"; this analyzes contents by predicting categories together with masks), and segment the shape ("the mask decoder…predicts mask…") in a region of interest in a bounding box inserted on the image (Section 3.1, Overall Architecture: "location decoder…detecting the bounding boxes" and "The mask decoder then takes both things and stuff queries as input and predicts mask and category at each layer"; note that a predicted bounding box defines the localized region used for downstream mask prediction and is also the same "box" that can be rendered/inserted as an overlay on the image), effectively showing a deep learning neural network that detects bounding boxes and analyzes the segments by predicting mask and category using a mask decoder.

Accordingly, it would have been obvious to a person of ordinary skill in the art at the time this invention was effectively filed to modify the Chao/Jain workflow to incorporate Li's concept of a deep learning neural network that detects bounding boxes and analyzes the segments by predicting mask and category using a mask decoder. A person of ordinary skill in the art would have been motivated to make this further modification because Li's bounding-box localization provides explicit spatial targeting for subsequent mask-based segmentation, while Li's mask and category prediction provides suitable outputs that improve the reliability and interpretation of automated inspection decisions in high-throughput vision systems without changing the foundation for acquisition and routing in the previous Chao/Jain pipeline.
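The "box that can be rendered/inserted as an overlay on the image" amounts to burning predicted box coordinates into the image buffer. A minimal editorial sketch, with hypothetical coordinates standing in for a detector's output (e.g., from a location decoder like Li's):

```python
# Minimal sketch of "inserting" a predicted bounding box on an image.
# The box coordinates are hypothetical stand-ins for detector output;
# drawing is plain numpy on a 2-D grayscale array.
import numpy as np

def draw_box(image, y0, x0, y1, x1, value=255):
    """Burn a one-pixel box border into a 2-D grayscale image (in place)."""
    image[y0, x0:x1 + 1] = value   # top edge
    image[y1, x0:x1 + 1] = value   # bottom edge
    image[y0:y1 + 1, x0] = value   # left edge
    image[y0:y1 + 1, x1] = value   # right edge
    return image

img = np.zeros((10, 10), dtype=np.uint8)
draw_box(img, 2, 3, 7, 8)
print(int((img == 255).sum()))  # 20 border pixels for a 6x6 box
```

Downstream mask prediction would then operate on the region inside this box, as the rationale above describes.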
As per claim 9, Chao and Jain cover all claim limitations previously presented in claim 6; see claim 6's 103 rejection. Neither Chao nor Jain teaches instructions comprising a backbone or image input module, a pixel decoder module, a multi-scale transformer encoder, and a mask-attention transformer decoder module.

Li teaches a backbone/image input module (Figure 2, label (a)); a multi-scale transformer encoder (Figure 2, label (b)); a mask-attention transformer decoder (Figure 2, labels (c) and (d), as well as the corresponding passage in Section 3.3.3, Mask decoder: "queries Q…The Keys K and values V…fetch the attention map A…"); and a functional pixel decoder module (Li describes the backbone/encoder producing and refining multi-scale spatial feature maps used for mask prediction; Figure 2: "The backbone and the encoder output and refine multi-scale features.").

Accordingly, it would have been obvious to a person of ordinary skill in the art at the time this invention was effectively filed to further modify the Chao/Jain workflow to incorporate Li's concept of using a backbone, multi-scale transformer encoder, mask-attention transformer decoder stack, and pixel decoder functionality. Li's architecture is designed to refine multi-scale features in the encoder and to output mask-based region decisions in the decoder, which matches the functional need in Chao for high-confidence localization analysis at line speeds. Li's encoder-driven refinement of multi-scale features provides stronger invariance to scale changes and viewpoint variability in carcass presentation, giving more consistent segmentation and analysis outputs usable for automated routing decisions downstream in Chao's process-control setting.

As per claim 10, Chao, Jain, and Li cover all claim limitations previously presented in claim 6; see claim 6's 103 rejection.
Li teaches creating a set of low-resolution feature maps from the region of interest using a convolutional neural network of the image input module (Section 3.1, Overall Architecture: "…feeds an input image…to the backbone network and obtains the feature maps C3, C4 and C5…resolutions are 1/8, 1/16 and 1/32"); creating a feature pyramid network of the feature maps with varying resolutions using the pixel decoder module (Li teaches a multi-scale pyramid produced and refined by the backbone and encoder; Section 3.1, Overall architecture: "feature maps C3, C4, and C5 from the last three stages, of which the resolutions are 1/8, 1/16 and 1/32 compared to the input image", and, in the same section, "transformer encoder is applied to refine the multi-scale feature maps given by the backbone"); creating a concatenated feature of feature maps from the feature pyramid network using the multi-scale transformer encoder (Figure 2, label (b), supported by "using the concatenated feature tokens as input, the transformer encoder outputs the refined features…" in Section 3.1, Overall architecture); and creating scaled feature maps with identical sizes of the frames or images using the mask-attention transformer decoder module (Figure 2, labels (c) and (d), supported in Section 3.3.3, Mask Decoder: "upsample these attention maps to the resolution of H/8×W/8 and concatenate them along the channel dimension").

Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have been motivated to further enhance the modified Chao/Jain/Li workflow to include Li's concept of an end-to-end, internally consistent mechanism to generate multi-resolution feature maps, concatenate features/tokens for encoder refinement, and upsample attention outputs to a single identical resolution before concatenation, yielding the "scaled to identical size" representation needed for stable mask/region computation in the Chao/Jain/Li inspection pipeline. This enhancement avoids resolution-mismatch artifacts and improves mask stability by forcing alignment to a single representation prior to fusion, reducing false positives and false negatives associated with inconsistent scale handling in high-throughput poultry inspection imagery.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Chao et al. (US 8126213 B2, hereinafter "Chao") in view of Karolewski et al. (EP 3731131 A1, "APPARATUS FOR ASSISTED COOKING", hereinafter "Karolewski"), in further view of Kobres et al. (US 9033227 B2, hereinafter "Kobres").

As stated in previous rejections, Chao teaches an automated high-speed poultry line handling setting, which includes a computer system that receives images from a camera. Chao also teaches explicit classification and output for process control ("multispectral inspection to detect and classify wholesome and unwholesome chicken carcasses…"; "(FIG. 4, Boxes 4.3 through 4.8), using the following algorithm to classify the bird carcass. For each line-scan image, fuzzy logic membership functions were used to produce two decision outputs for each non-background pixel in the line-scan image"). Chao does not teach using a scale with a display, reading the display with a digit recognizer, and receiving a direct weight output from the scale for cross-verification.

Karolewski teaches automatically placing the parts on a scale (Description: "positioning the foodstuff in the field of view of the camera comprises the step of positioning the foodstuff on the scale, and determining the weight of the foodstuff using the scale."); a scale with a display (Description, paragraph [0011]: "electronic scales with an LCD display for presenting visually the weighing result to the user.
The apparatus can recognize the content displayed on the display of the scale to determine the weight of the foodstuff."); a data connection with the computer system (paragraph [0012]: "…scale to the communication module via a Bluetooth connection"; note also that Karolewski specifies "…further aspect the invention may comprise a computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method according to the invention."); obtaining images from a camera of the processed parts on the scale ("…a camera…for recording an image…positioning a scale in the field view of the camera…positioning the foodstuff on the scale…"); obtaining images from the camera of the scale and the display on the scale (paragraph [0010]: "recognizing a display of a scale within the image", and paragraph [0013]: "position a scale in the field of view of the camera such that its display can be recognized"); and using a computer-implemented digit recognizer module to determine the weight indicated on the scale while processed parts are on the scale (paragraph [0010]: "processor is further adapted to determine the weight of the foodstuff by recognizing a display of a scale within the image and by recognizing a content displayed on the display of the scale to determine the weight of the foodstuff, wherein the content is preferably recognized using OCR.").

Neither Chao nor Karolewski uses a "time series analysis" to verify that a weight measurement output directly from the scale to the computer system matches the weight displayed on the scale while the classified poultry parts are being weighed.
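The cross-verification at issue in claim 11 reduces to comparing two weight streams over repeated readings. A minimal editorial sketch, where the tolerance, the reading lists, and the function name are illustrative assumptions (not drawn from any of the cited references):

```python
# Hedged sketch of the claimed cross-verification: compare the weight
# read off the scale's display (e.g., via OCR) against the weight the
# scale transmits directly, over a short series of repeated readings.
# Tolerance and reading values are illustrative assumptions.

def weights_match(displayed, transmitted, tol=0.01):
    """True if every paired reading agrees within tol (same units assumed)."""
    if len(displayed) != len(transmitted):
        return False
    return all(abs(d - t) <= tol for d, t in zip(displayed, transmitted))

ocr_readings   = [1.52, 1.53, 1.52]   # values recognized from the display
scale_readings = [1.52, 1.53, 1.52]   # values sent over the data connection
print(weights_match(ocr_readings, scale_readings))      # True
print(weights_match(ocr_readings, [1.52, 1.60, 1.52]))  # False (mismatch)
```

Comparing across a series of observations, rather than a single reading, is the "repeated observation" idea the Kobres citation supplies.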
Kobres teaches applying software routines that analyze scale output signals, verify them, and compare measurements across repeated observations and/or series (Column 1, line 65 to Column 2, line 1: "software routine…analyzes the signal output from the security weight scale"; Column 2, line 25: "…continuously updates with each new accepted observation weight"; Column 2, line 34: "can learn the weights…from a series of weightings…"; this supplies the time-series, repeated-observation verification concept).

Accordingly, it would have been obvious to one of ordinary skill in the art at the time this invention was effectively filed to have modified Chao (directed to automated poultry image acquisition and classification for process control on a high-speed processing line) with Karolewski (directed towards determining weight from a scale display via OCR and obtaining weight via a wireless data connection from the scale), and further with Kobres (directed to software verification and comparison of a scale output using repeated time-series analysis), and to have arrived at the claimed weighing and classification process with cross-checking that the transmitted weight matches the displayed weight. Kobres teaches improving reliability by analyzing and verifying weight-scale signal outputs via comparison logic over repeated observations by the system. This directly supports the advantage of detecting mismatches/measurement errors in the high-throughput poultry workflow Chao already targets, reducing variability and improving efficiency.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANE WRENSFORD CODRINGTON, whose telephone number is (571) 272-8130. The examiner can normally be reached 8:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHANE WRENSFORD CODRINGTON/
Examiner, Art Unit 2667

/TOM Y LU/
Primary Examiner, Art Unit 2667

Prosecution Timeline

Mar 05, 2024
Application Filed
Jan 09, 2026
Non-Final Rejection — §102, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 0% (-100.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
