Prosecution Insights
Last updated: April 19, 2026
Application No. 18/257,572

System and method of assisted or automated unload synchronization

Status: Final Rejection (§103)
Filed: Jun 14, 2023
Examiner: HEFLIN, HARRISON JAMES RIEL
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: AGCO International GmbH
OA Round: 4 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 5-6
Median Time to Grant: 2y 9m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 73% (101 granted / 139 resolved; +20.7% vs TC avg) — above average
Interview Lift: +13.0% (moderate), comparing resolved cases with an interview against those without
Typical Timeline: 2y 9m average prosecution; 22 applications currently pending
Career History: 161 total applications across all art units
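
The interview-lift figure is simply the gap between the examiner's allow rate among resolved cases that included an interview and the rate among those that did not. A minimal sketch of that computation; the case records below are hypothetical placeholders, not the dataset behind these cards.

```python
def allow_rate(cases):
    """cases: list of (granted: bool, had_interview: bool) records."""
    return sum(granted for granted, _ in cases) / len(cases)

def interview_lift(cases):
    """Percentage-point gap in allow rate: cases with an interview minus cases without."""
    with_iv = [c for c in cases if c[1]]
    without_iv = [c for c in cases if not c[1]]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Hypothetical resolved-case records: (granted, had_interview).
cases = [(True, True), (True, True), (False, True),
         (True, False), (False, False), (False, False)]
print(f"Career allow rate: {allow_rate(cases):.0%}")      # 50% on this toy data
print(f"Interview lift: {interview_lift(cases):+.0%}")    # +33 points on this toy data
```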

Statute-Specific Performance

§101: 13.2% (-26.8% vs TC avg)
§103: 47.7% (+7.7% vs TC avg)
§102: 20.2% (-19.8% vs TC avg)
§112: 15.4% (-24.6% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 139 resolved cases
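
The statute-specific rows can be reproduced by grouping resolved cases by the statute of the last rejection and comparing each group's allow rate to the Tech Center average. A sketch under assumed inputs; the records and TC averages below are illustrative only.

```python
from collections import defaultdict

# Hypothetical records: (statute of last rejection, granted?). Illustrative only.
cases = [("103", True), ("103", False), ("103", True),
         ("101", False), ("112", False), ("102", True), ("102", False)]
tc_avg = {"101": 0.40, "102": 0.40, "103": 0.40, "112": 0.40}  # assumed TC estimates

tallies = defaultdict(lambda: [0, 0])  # statute -> [grants, resolved]
for statute, granted in cases:
    tallies[statute][0] += granted
    tallies[statute][1] += 1

for statute in sorted(tallies):
    grants, resolved = tallies[statute]
    rate = grants / resolved
    print(f"§{statute}: {rate:.1%} ({rate - tc_avg[statute]:+.1%} vs TC avg)")
```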

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see the section titled "35 U.S.C. 103(a) Obviousness Rejections" starting on page 5 of the reply filed 01/07/2026, have been fully considered but they are not persuasive. Applicant argues that Bonefas, in view of Faust and Liu, does not explicitly disclose the contents of amended claim 1, specifically determining a location of the receiving vehicle relative to the agricultural harvester, wherein the determined location of the receiving vehicle relative to the agricultural harvester comprises at least one pair of coordinates. However, the Examiner disagrees. For example, in paragraphs [0087-0092], Bonefas discloses computing the relative location X₁ of the forage harvester 10 and the transport vehicle 12 based on their GPS coordinates from their respective GPS units, or position-determining devices 72 and 76, and computing the relative location X₂ of the first edge 19A of the container 16 to the transport vehicle 12 from the GPS coordinates and the 3D stereo measurements of the salient features of the front edge 19A of the container 18. The Examiner understands that GPS coordinates used in addition to the 3D stereo measurements comprise at least one pair of coordinates. See also paragraph [0056] of Faust, where Faust teaches that the inputs can also include a receiving vehicle position signal 408 indicating a geographic position of receiving vehicle 256 in a coordinate system and/or a harvesting machine position signal 410 indicating a position of harvesting machine 200 in the coordinate system. Therefore, it is the Examiner's opinion that Bonefas, in view of Faust and Liu, discloses the newly amended subject matter as claimed. See the rejections below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 and 3-9 are rejected under 35 U.S.C. 103 as being unpatentable over Bonefas (US 2017/0049053 A1), in view of Faust (US 2022/0095539 A1) and Liu (US 2019/0103026 A1).

Regarding claim 1, Bonefas discloses a system comprising:

an agricultural harvester (In paragraph [0028], Bonefas discloses a self-propelled harvesting machine 10) comprising:

a crop processor for reducing crop material to processed crop (In paragraph [0029], Bonefas discloses that the harvesting machine 10 includes a harvesting attachment 28, in the form of a corn header attachment, which is affixed to an entry channel 30 on the front side 10A of the forage harvester 10, where crop plants 58 harvested from a field 34 by way of the harvesting attachment 28 are conveyed to a cutter head 36 via a gathering conveyor with pre-compression rollers located in the entry channel 30, and the cutter head 36 acts in this embodiment as a crop processing unit for processing the crop plants 58 received from the harvesting attachment 28, and hence chops them into small pieces and delivers them to a discharge accelerator 38);

an unloading conveyor for transferring a stream of processed crop out of the agricultural harvester (In paragraph [0030], Bonefas discloses that the crops discharged from the discharge accelerator 38 exit the harvesting machine 10 to the container 18 of a transport vehicle 12 via an adjustable transfer device 40 in the form of a discharge spout 45); and

a camera for capturing images of an area proximate the agricultural harvester and generating image data (In paragraph [0043], Bonefas discloses that the harvesting machine 10 includes an optical image capture device 136, which is placed more or less in the middle of the adjustable transfer device 40 on its left or right or underside 40A (FIG. 1A), and during the harvesting operation, is aligned on the container 18 and is preferably implemented as a stereo-camera having two lenses 137 and two image sensors (not shown) arranged one above the other or side by side); and

a controller (In paragraph [0036], Bonefas discloses that the harvesting machine 10 includes an electronic control unit 112 including a processor and memory) comprising: at least one processor (In paragraph [0036], Bonefas discloses that the harvesting machine 10 includes an electronic control unit 112 including a processor and memory); and at least one non-transitory computer-readable storage medium storing instructions thereon (In paragraph [0036], Bonefas discloses that the harvesting machine 10 includes an electronic control unit 112 including a processor and memory) that, when executed by the at least one processor, cause the controller to:

receive the image data from the camera during a harvest operation (In paragraph [0043], Bonefas discloses that the electronic control unit 112 receives the signals from the optical image capture device 136 via an image processing system 138 that processes the image signals from a signal output of the optical image capture device 136 in order to extract the position of features of the container 18 of transport vehicle 12 within the field of view 135 of the optical image capture device 136);

identify, using only image data from the camera, a receiving vehicle from among a plurality of possible different receiving vehicles (In paragraph [0043], Bonefas discloses that the electronic control unit 112 receives the signals from the optical image capture device 136 via an image processing system 138 that processes the image signals from a signal output of the optical image capture device 136 in order to extract the position of features of the container 18 of transport vehicle 12 within the field of view 135 of the optical image capture device 136);

determine location and a size of the identified receiving vehicle in the image data (In paragraph [0048], Bonefas discloses that the distance between the discharge spout 45 of harvesting machine 10 (or the machine 10 itself, e.g. the rotation point of the discharge spout 45 around the vertical axis) and the front edge 19A of the container 18 can be derived from the signal of the image processing system 138 since the optical image capture device 136 is a stereo camera, or if the optical image capturing device 136 were a monocular camera, the size (pixels) of the near edge of the container 18 in the image could be used as an estimate for the mentioned distance; in paragraph [0115], Bonefas discloses that once a target is detected, a target tracking module 402 determines the pose trajectory of the target, such as the receiving vehicle 12 or the container 18, and in one embodiment, the tracking module computes trajectory with four degrees of freedom (3D position plus heading), together with the dimensions of the target (length, width and height), using the input data from the image capture device 136; see also paragraph [0046] where Bonefas discloses that since the optical image capture device 136 is a stereo camera, its signals allow to estimate a distance between the harvesting machine 10 and the container 18 and the height of the upper edges 19 of the container 18 over ground);

determine, using the location and the size of the receiving vehicle in the image data, a location of the receiving vehicle relative to the agricultural harvester, wherein the determined location of the receiving vehicle relative to the agricultural harvester comprises at least one pair of coordinates (In paragraph [0048], Bonefas discloses that the distance between the discharge spout 45 of harvesting machine 10 (or the machine 10 itself, e.g. the rotation point of the discharge spout 45 around the vertical axis) and the front edge 19A of the container 18 can be derived from the signal of the image processing system 138 since the optical image capture device 136 is a stereo camera, or if the optical image capturing device 136 were a monocular camera, the size (pixels) of the near edge of the container 18 in the image could be used as an estimate for the mentioned distance; in paragraph [0046], Bonefas discloses that the electronic control unit 112 controls actuators 46, 48, 52 according to the signal from the optical image capture device 136, processed by image processing system 138, wherein in the image from the optical image capture device 136, features are identified, for example the upper edge 19 of the container 18 (FIG. 1B), and the actuators 46, 48, 52 are controlled such that the crop flow expelled by the adjustable transfer device 40 hits the interior of the container 18, and a feedback for the impact point of the crop plants 58 on the container 18 can be derived from the image signal from the optical image capture device 136, and further, since the optical image capture device 136 is a stereo camera, its signals allow to estimate a distance between the harvesting machine 10 and the container 18 and the height of the upper edges 19 of the container 18 over ground, such that the actuators 46, 48 and 52 can be controlled according to a known kinematic model of the free crop flow downstream the adjustable transfer device 40; see also paragraphs [0087-0092] where Bonefas discloses computing the relative location X₁ of the forage harvester 10 and the transport vehicle 12 based on their GPS coordinates from their respective GPS units, or position-determining devices 72 and 76, and computing the relative location X₂ of the first edge 19A of the container 16 to the transport vehicle 12 from the GPS coordinates and the 3D stereo measurements of the salient features of the front edge 19A of the container 18); and

generate automated navigation data based on the location of the receiving vehicle, the automated navigation data to automatically control operation of at least one of the agricultural harvester and the receiving vehicle to align the unloading conveyor with the receiving vehicle (In paragraph [0046], Bonefas discloses that the electronic control unit 112 controls actuators 46, 48, 52 according to the signal from the optical image capture device 136, processed by image processing system 138, wherein in the image from the optical image capture device 136, features are identified, for example the upper edge 19 of the container 18 (FIG. 1B), and the actuators 46, 48, 52 are controlled such that the crop flow expelled by the adjustable transfer device 40 hits the interior of the container 18, and a feedback for the impact point of the crop plants 58 on the container 18 can be derived from the image signal from the optical image capture device 136, and further, since the optical image capture device 136 is a stereo camera, its signals allow to estimate a distance between the harvesting machine 10 and the container 18 and the height of the upper edges 19 of the container 18 over ground, such that the actuators 46, 48 and 52 can be controlled according to a known kinematic model of the free crop flow downstream the adjustable transfer device 40).

Although in paragraph [0110] Bonefas discloses that a "lazy evaluation" technique is commonly used to optimize machine learning pipelines and avoids the computation of extra features when an input window is clearly classified as part of the background after only evaluating the first feature channel, Bonefas does not explicitly disclose applying a machine learning model to identify, using only image data from the camera, a receiving vehicle from among a plurality of possible different receiving vehicles; and determining, using the location and the size of the receiving vehicle in the image data, a location of the receiving vehicle relative to the agricultural harvester by comparing known dimensions of the receiving vehicle with the size of the receiving vehicle in the image data.
However, Faust teaches applying a machine learning model to identify, using only image data from the camera, a receiving vehicle from among a plurality of possible different receiving vehicles (In paragraphs [0027-0028], Faust teaches that a camera 106 can be positioned to have a field of view that captures an image of the side portion 118 of trailer 116, and thus, the visual or optical features of the side portion of trailer 116 can be used to uniquely identify trailer 116, or at least to identify the type of the trailer 116, wherein the visual features can be detected using a computer vision analysis system, using a deep neural network, or using other image processing techniques and mechanisms for identifying visual features or characteristics in an image, a set of images, or a video).

Faust is considered to be analogous to the claimed invention in that they both pertain to the use of a machine learning model to identify a particular receiving vehicle for the receipt of material being transferred from a harvester. It would be obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Faust with the system as disclosed by Bonefas, where the use of machine-learning for image recognition is well understood in the art, and may be implemented without undue experimentation, and with a reasonable expectation of success and with predictable results. Doing so may be advantageous in that "Based on the unique trailer identifier or the type identifier, the settings values for the automatic cart filling control system can be obtained so that the cart is filled in a cart-specific way or in a cart type-specific way, depending upon whether the cart is uniquely identified or the cart type is identified" as suggested by Faust in paragraph [0027], thereby increasing the contextual sensitivity and accuracy of operation by the system, for example.

The combination of Bonefas and Faust does not explicitly disclose determining, using the location and the size of the receiving vehicle in the image data, a location of the receiving vehicle relative to the agricultural harvester by comparing known dimensions of the receiving vehicle with the size of the receiving vehicle in the image data.

However, Liu teaches determining, using the location and the size of the receiving vehicle in the image data, a location of the receiving vehicle relative to the agricultural harvester by comparing known dimensions of the receiving vehicle with the size of the receiving vehicle in the image data (In paragraph [0043], Liu teaches depth estimation according to an embodiment, where the tracker 330 uses a pinhole camera model to estimate the depth "Z" (distance) from a vehicle at point C to another object, where "f" represents the focal length of an image sensor of a client device 110 with the vehicle 140 and the other object may be another vehicle having a width "Y" (e.g., approximately 1.8 meters), and in some embodiments, the tracker 330 may use a lookup table to determine the expected width "Y," height, or aspect ratio of a vehicle based on a type of the vehicle, and based on trigonometry, the tracker 330 may output a bounding box at point "p" having a width y=f*Y/Z, and thus, the tracker 330 determines the depth to be Z=f*Y/y). Liu is considered to be analogous to the claimed invention in that they both pertain to utilizing the scale of a vehicle in a captured image with its known dimensions in order to determine its relative location.
It would be obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Liu with the system as disclosed by the combination of Bonefas and Faust, where the Examiner understands that the taught image recognition technique is well understood in the art, and may be implemented without undue experimentation, and with a reasonable expectation of success and with predictable results. Doing so may be advantageous in that the detection of the vehicle and subsequent determination of location may be performed utilizing already known information such as its expected width, increasing efficiency of the calculation, for example.

Regarding claim 3, Faust further teaches the machine learning model comprising a deep learning model (In paragraphs [0027-0028], Faust teaches that a camera 106 can be positioned to have a field of view that captures an image of the side portion 118 of trailer 116, and thus, the visual or optical features of the side portion of trailer 116 can be used to uniquely identify trailer 116, or at least to identify the type of the trailer 116, wherein the visual features can be detected using a computer vision analysis system, using a deep neural network, or using other image processing techniques and mechanisms for identifying visual features or characteristics in an image, a set of images, or a video).

Regarding claim 4, Bonefas further discloses the agricultural harvester further comprising an electromagnetic detecting and ranging module (In paragraph [0043], Bonefas discloses that the harvesting machine 10 includes an optical image capture device 136, which is placed more or less in the middle of the adjustable transfer device 40 on its left or right or underside 40A (FIG. 1A), and during the harvesting operation, is aligned on the container 18 and is preferably implemented as a stereo-camera having two lenses 137 and two image sensors (not shown) arranged one above the other or side by side), wherein the controller further comprises instructions that, when executed by the at least one processor, cause the controller to: receive data from the electromagnetic detecting and ranging module (In paragraph [0043], Bonefas discloses that the electronic control unit 112 receives the signals from the optical image capture device 136 via an image processing system 138 that processes the image signals from a signal output of the optical image capture device 136 in order to extract the position of features of the container 18 of transport vehicle 12 within the field of view 135 of the optical image capture device 136); and generate the automated navigation data based on the location of the receiving vehicle and the data from the electromagnetic detecting and ranging module (In paragraph [0046], Bonefas discloses that the electronic control unit 112 controls actuators 46, 48, 52 according to the signal from the optical image capture device 136, processed by image processing system 138, wherein in the image from the optical image capture device 136, features are identified, for example the upper edge 19 of the container 18 (FIG. 1B), and the actuators 46, 48, 52 are controlled such that the crop flow expelled by the adjustable transfer device 40 hits the interior of the container 18, and a feedback for the impact point of the crop plants 58 on the container 18 can be derived from the image signal from the optical image capture device 136, and further, since the optical image capture device 136 is a stereo camera, its signals allow to estimate a distance between the harvesting machine 10 and the container 18 and the height of the upper edges 19 of the container 18 over ground, such that the actuators 46, 48 and 52 can be controlled according to a known kinematic model of the free crop flow downstream the adjustable transfer device 40).

Faust further teaches wherein the electromagnetic detecting and ranging module is for detecting at least one of a fill level and a distribution of grain in the receiving vehicle (In paragraphs [0022], [0033], and [0038], Faust teaches that forage harvester 100 includes an automatic cart filling control system (described in greater detail below) that uses a camera 106 mounted on the spout 108, which captures an image of the receiving area 112 of cart 102, wherein the harvesting machine may include sensors 206 such as a LIDAR (light detection and ranging) sensor 228 and/or a RADAR sensor 23 that may be alternatively utilized to perform the same function as optical sensor(s) 220 that capture stereo images that can be processed to identify a distance of receiving vehicle 256 from harvesting machine 200; in paragraphs [0018], [0023], and [0026], Faust teaches that the sensing system can gauge the height of harvested material in cart 102, and the location of that material, and thus automatically control the position of spout 108 to direct the trajectory of material 110 into the receiving area 112 of cart 102 to obtain an even fill throughout the entire length of cart 102, while not overfilling cart 102), and wherein the received data from the electromagnetic detecting and ranging module indicates the at least one of the fill level and the distribution of grain in the grain bin of the receiving vehicle (In paragraphs [0018], [0023], and [0026], Faust teaches that the sensing system can gauge the height of harvested material in cart 102, and the location of that material, and thus automatically control the position of spout 108 to direct the trajectory of material 110 into the receiving area 112 of cart 102 to obtain an even fill throughout the entire length of cart 102, while not overfilling cart 102). It would be obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to further implement these teachings of Faust, where doing so allows the harvester to "automatically aim the spout toward empty spots and control the flap position to achieve a more even fill, while reducing spillage" in the receiving vehicle as suggested by Faust in paragraph [0018], which advantageously improves the efficiency in storing and transporting the material by the receiving vehicle, for example.

Regarding claim 5, Bonefas further discloses wherein the electromagnetic detecting and ranging module is configured to perform a two-dimensional scan (In paragraph [0043], Bonefas discloses that the harvesting machine 10 includes an optical image capture device 136, which is placed more or less in the middle of the adjustable transfer device 40 on its left or right or underside 40A (FIG. 1A), and during the harvesting operation, is aligned on the container 18 and is preferably implemented as a stereo-camera having two lenses 137 and two image sensors (not shown) arranged one above the other or side by side; see also paragraphs [0079-0080] and [0086-0087], where Bonefas discloses that a stereo camera captures the salient features from the video and 3D data near the front edge 19A of the container 18 and uses it as the tracking template; the Examiner understands that even in a case where the stereo camera is used to obtain 3D data, the 3D scan includes at least scans in a lower number of dimensions, including at least a two-dimensional scan).

Regarding claim 6, Bonefas further discloses wherein the electromagnetic detecting and ranging module is configured to perform a three-dimensional scan (In paragraph [0043], Bonefas discloses that the harvesting machine 10 includes an optical image capture device 136, which is placed more or less in the middle of the adjustable transfer device 40 on its left or right or underside 40A (FIG. 1A), and during the harvesting operation, is aligned on the container 18 and is preferably implemented as a stereo-camera having two lenses 137 and two image sensors (not shown) arranged one above the other or side by side; in paragraphs [0079-0080] and [0086-0087], Bonefas discloses that a stereo camera captures the salient features from the video and 3D data near the front edge 19A of the container 18 and uses it as the tracking template).

Regarding claim 7, Faust further teaches wherein the electromagnetic detecting and ranging module comprises a light detecting and ranging (LiDAR) module (In paragraphs [0033] and [0038], Faust teaches wherein the harvesting machine may include sensors 206 such as a LIDAR (light detection and ranging) sensor 228 and/or a RADAR sensor 23 that may be alternatively utilized to perform the same function as optical sensor(s) 220 that capture stereo images that can be processed to identify a distance of receiving vehicle 256 from harvesting machine 200).

Regarding claim 8, Faust further teaches wherein the electromagnetic detecting and ranging module comprises a radio detecting and ranging (RADAR) module (In paragraphs [0033] and [0038], Faust teaches wherein the harvesting machine may include sensors 206 such as a LIDAR (light detection and ranging) sensor 228 and/or a RADAR sensor 23 that may be alternatively utilized to perform the same function as optical sensor(s) 220 that capture stereo images that can be processed to identify a distance of receiving vehicle 256 from harvesting machine 200).

Regarding claim 9, Bonefas discloses wherein the electromagnetic detecting and ranging module is mounted at an end of the unloading conveyor distal a body of the agricultural harvester (In paragraph [0043], Bonefas discloses that the harvesting machine 10 includes an optical image capture device 136, which is placed more or less in the middle of the adjustable transfer device 40 on its left or right or underside 40A (FIG. 1A)).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Herman (US 2016/0183463 A1) teaches a control arrangement and method for controlling a position of a transfer device of a harvesting machine.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Harrison Heflin whose telephone number is (571) 272-5629. The examiner can normally be reached Monday - Friday, 1:00PM - 10:00PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hunter Lonsberry, can be reached at 571-272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HARRISON HEFLIN/
Examiner, Art Unit 3665

/HUNTER B LONSBERRY/
Supervisory Patent Examiner, Art Unit 3665
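
Two of the techniques the rejection leans on are compact enough to illustrate. First, the relative location X₁ that the Response to Arguments cites from Bonefas comes down to differencing two GPS fixes in a local tangent plane. A rough sketch using a flat-earth approximation; the coordinates and function names are hypothetical, not code from the reference.

```python
import math

EARTH_RADIUS_M = 6371000.0

def relative_position(harvester_fix, transport_fix):
    """Approximate east/north offset in meters of the transport vehicle relative
    to the harvester, from two (lat, lon) GPS fixes in degrees. Uses an
    equirectangular (flat-earth) approximation, adequate at field scale."""
    lat1, lon1 = map(math.radians, harvester_fix)
    lat2, lon2 = map(math.radians, transport_fix)
    east = (lon2 - lon1) * math.cos((lat1 + lat2) / 2) * EARTH_RADIUS_M
    north = (lat2 - lat1) * EARTH_RADIUS_M
    return east, north

# Hypothetical fixes a few meters apart.
east, north = relative_position((51.50000, -0.12000), (51.50002, -0.11997))
print(f"Transport vehicle offset: {east:.1f} m east, {north:.1f} m north")
```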
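Second, the Liu depth estimate the rejection quotes (Z = f·Y/y, from the pinhole camera model) uses the known width Y of a vehicle and its observed bounding-box width y in pixels. A minimal sketch; the focal length and pixel width below are assumed values, only the 1.8 m width echoes Liu's example.

```python
def depth_from_known_width(focal_px: float, known_width_m: float,
                           observed_width_px: float) -> float:
    """Pinhole-camera depth: Z = f * Y / y, with f in pixels, Y in meters,
    and y the bounding-box width in pixels."""
    return focal_px * known_width_m / observed_width_px

f_px = 1400.0  # assumed calibrated focal length, in pixels
Y = 1.8        # expected vehicle width, e.g. from a lookup table (Liu's example value)
y = 90.0       # measured bounding-box width, in pixels (assumed)
print(f"Estimated depth: {depth_from_known_width(f_px, Y, y):.1f} m")  # 28.0 m
```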

Prosecution Timeline

Jun 14, 2023
Application Filed
Mar 07, 2025
Non-Final Rejection — §103
Jun 09, 2025
Response Filed
Jun 18, 2025
Final Rejection — §103
Aug 25, 2025
Response after Non-Final Action
Sep 12, 2025
Request for Continued Examination
Oct 01, 2025
Response after Non-Final Action
Oct 15, 2025
Non-Final Rejection — §103
Jan 07, 2026
Response Filed
Feb 10, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596369: CONTROL SYSTEM, MOBILE OBJECT, CONTROL METHOD, AND STORAGE MEDIUM
Granted Apr 07, 2026 · 2y 5m to grant

Patent 12566443: ROBOT TRAVELING IN SPECIFIC SPACE AND CONTROL METHOD THEREOF
Granted Mar 03, 2026 · 2y 5m to grant

Patent 12559894: SYSTEMS AND METHODS TO APPLY SURFACE TREATMENTS
Granted Feb 24, 2026 · 2y 5m to grant

Patent 12541202: UNMANNED VEHICLE AND INFORMATION PROCESSING METHOD
Granted Feb 03, 2026 · 2y 5m to grant

Patent 12497275: APPARATUS FOR MOVING A PAYLOAD
Granted Dec 16, 2025 · 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 73% (86% with interview, +13.0%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 139 resolved cases by this examiner. Grant probability derived from career allow rate.
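
The projection arithmetic here is deliberately simple: the baseline grant probability is the career allow rate (101/139 ≈ 73%), and the interview figure adds the observed lift on top. A sketch of that derivation; the cap at 100% is an assumption, not documented behavior.

```python
def project_grant_probability(grants: int, resolved: int,
                              interview_lift: float = 0.0) -> float:
    """Baseline = career allow rate; optionally add the interview lift.
    Capping at 100% is an assumption about the tool, not documented behavior."""
    return min(grants / resolved + interview_lift, 1.0)

print(f"Baseline:       {project_grant_probability(101, 139):.0%}")        # 73%
print(f"With interview: {project_grant_probability(101, 139, 0.13):.0%}")  # 86%
```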
