Prosecution Insights
Last updated: April 19, 2026
Application No. 16/510,771

OBJECT IDENTIFICATION AND COLLECTION SYSTEM AND METHOD

Status: Final Rejection (§103)
Filed: Jul 12, 2019
Examiner: SANTOS, AARRON EDUARDO
Art Unit: 3663
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Terraclear Inc.
OA Round: 8 (Final)

Grant Probability: 45% (Moderate)
OA Rounds: 9-10
To Grant: 3y 4m
With Interview: 58%

Examiner Intelligence

Career Allow Rate: 45% (59 granted / 131 resolved; -7.0% vs TC avg)
Interview Lift: +12.8% (moderate, roughly +13 points, for resolved cases with an interview vs. without)
Typical Timeline: 3y 4m avg prosecution; 63 applications currently pending
Career History: 194 total applications across all art units

Statute-Specific Performance

§101: 12.0% (-28.0% vs TC avg)
§102: 5.3% (-34.7% vs TC avg)
§103: 58.6% (+18.6% vs TC avg)
§112: 21.5% (-18.5% vs TC avg)

Comparisons are against estimated Tech Center averages; based on career data from 131 resolved cases.
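For readers who want to sanity-check the figures above, the headline metrics can be reproduced from the stated raw counts. A minimal sketch; the formulas (a simple ratio and percentage-point offsets) are assumptions inferred from the card labels, not a documented methodology:

```python
# Reproduce the dashboard's headline examiner metrics from the raw counts.
# The formulas are assumptions inferred from the card labels, not a
# documented methodology.

granted = 59    # career grants by this examiner
resolved = 131  # resolved cases (grants + abandonments)

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")        # 45.0%, shown as 45%

# The card reports -7.0% vs the Tech Center average, implying a TC average of:
tc_avg = allow_rate + 7.0
print(f"Implied TC average: {tc_avg:.1f}%")           # ~52.0%

# A +12.8-point interview lift on the base rate gives the "with interview" figure:
with_interview = allow_rate + 12.8
print(f"Allow rate with interview: {with_interview:.1f}%")  # 57.8%, shown as 58%
```

The displayed 45% and 58% are simply these values rounded to whole percentage points.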

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claims 1 and 13 have been amended. No new claims have been introduced. No claims have been cancelled. Claims 1-20 are currently pending.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1 and 3-12 is/are rejected under 35 U.S.C.
103 as being unpatentable over Wilkins (US 10796275 B1) in view of Fevold (US 20190294914 A1) and in further view of Gielis (US 20200323140 A1).

REGARDING CLAIM 1, Wilkins discloses, employing an aerial image-collection vehicle to capture a first plurality of images of a target geographical area (Wilkins: the UAV 100 can locate the products that meet the criteria in the order (e.g., size, quantity, ripeness, color, type, etc.) … can use the sensor package 108 to assess one or more products on a particular tree 212 (Col. 9, Ln. 13-27)); determining object information for each of the one or more identified objects that were identified using the first artificial neural network (Wilkins: the UAV 100 can locate the products that meet the criteria in the order (e.g., size, quantity, ripeness, color, type, etc.) … can use the sensor package 108 to assess one or more products on a particular tree 212 (Col. 9, Ln. 13-27)); after the remotely operable image-collection vehicle has completed scanning the target geographical area and processing the first plurality of images (Wilkins: the UAV 100 can locate the products that meet the criteria in the order (e.g., size, quantity, ripeness, color, type, etc.) ... the location provided in the order provides the starting point, but some variability still exists even with a particular quadrant ... some, but not all, of the apples in a particular quadrant … (Col. 9, Ln. 13-27); (Col. 9, Ln. 28-37)), guiding a ground-based object-collection system over the target geographical area toward the one or more identified objects based on the object information from the first plurality of images (Wilkins: (Col. 9, Ln. 28-37); drop it, or lower it, to the ground for retrieval by the transporter 502 (Col. 10, Ln. 20-23); central control 204 may tally a plurality of updated locations from the UAV(s) 100 until enough products are reported in a particular area (e.g.
a quadrant 214 or sector 216) and then dispatch the harvester 602 to harvest a bulk amount of the product all at once (Col. 14, Ln. 37-50); ... the transporter 502 can also have its own sensor package 520 to enable it to locate the product and/or navigate (Col. 10, Ln. 58-65)).

Wilkins is replete with recitations of examining/inspecting (scanning) produce and providing location information to a UGV, thus disclosing scanning and guiding. If an object is inspected (imaging device) by a first vehicle (first image stream), and based upon inspection the UGV is guided to a location for a specific task, that is explicitly "guiding a ground-based object-collection system over the target geographical area toward the one or more identified objects based on the object information from the first plurality of images". In this case, "tracking" is interpreted as locating products across images. Wilkins discloses an automated ground machine for locating and picking/grasping specified objects (see at least figures 5a and 6).
Wilkins does not explicitly disclose, identifying one or more objects on a ground in the first plurality of images using a first artificial neural network; capturing a second plurality of images of the ground from the ground-based object-collection system while the object-collection system is in movement being guided toward the one or more identified objects on the ground; identifying a target object on the ground in the second plurality of images based on a second dataset of trained object parameters in a second artificial neural network while the object-collection system is in movement; determining, using image-based information, when the target object is at a position in at least one of the plurality of images to trigger pick up the target object using three or more paddles, in response to determining, using image-based information, that the target object is at the position to trigger pick up the target object using the three or more paddles, instructing the ground-based object picker assembly to move the three or more paddles from a storage height to a pick-up height to pick up the target object from the ground using the three or more paddles to make the pinching movement on the target object while the object-collection system is in movement, wherein the image-based information is utilized to activate movement of the three or more paddles into position to pick up the target object with the pinching movement; picking up the target object from the ground by activating the hinge to make the pinching movement with the three or more paddles on the target object to be picked up while the object-collection system is in movement, and moving the three or more paddles from the pick-up height to the storage height after the target object has been picked.
However, in the same field of endeavor, Fevold discloses, identifying one or more objects on a ground in the first plurality of images using a first artificial neural network (Fevold: [0049] The SVM is trained on images containing a particular object, a bale in this case. The SVM classifier makes decisions regarding the presence of the bale using the extracted Haar features); capturing a second plurality of images of the ground from the ground-based object-collection system while the object-collection system is in movement being guided toward the one or more identified objects on the ground (Fevold: [0031-0033]; [0044] produce n image frames per second (examiner: every second there are a "second" plurality of images and "third" and so on ...); [0063-0064] (examiner: implies while moving); [0068]; [0072]; [0081-0083]; [0093]; [0096]); identifying a target object on the ground in the second plurality of images based on a second dataset of trained object parameters in a second artificial neural network while the object-collection system is in movement (Fevold: [0049] The SVM is trained on images containing a particular object, a bale in this case); determining, using image-based information, when the target object is at a position in at least one of the plurality of images to trigger pick up the target object using three or more paddles (Fevold: [0033-0035]; [0063]; [0073] a left bale loading arm 1204 and a right bale loading arm 1210; [FIG. (1202, paddles)]; paragraphs [0065-0070] disclose bale mapping, [0071-0079] disclose the autonomous bale mover locating bales via stereo cameras and other support sensors), in response to determining, using image-based information, that the target object is at the position to trigger pick up the target object using the three or more paddles (Fevold: [0074] During transport of the autonomous bale mover 1200 … the bale loading system 1202 is typically in the raised state.
As the bale loading arms 1204 and 1210 advanced to, and make contact with, a bale 122, the bale loading system 1202 transitions from the raised state to the lowered state. After picking up a bale 122, the bale loading system 1202 moves to the raised state, and the bale mover 1200 searches for a new bale to pick up; [0093]; [0008]), instructing the ground-based object picker assembly to move the three or more paddles from a storage height to a pick-up height to pick up the target object from the ground using the three or more paddles to make the pinching movement on the target object while the object-collection system is in movement (Fevold: [0074]; [0093] continues to move toward the bale and engage the bale while moving forward (not stopping) with the bale arm tracks 1206 and 1212 running, thereby picking up the bale on the move; [0008] The method also comprises picking up located bales by the bale mover without stopping), wherein the image-based information is utilized to activate movement of the three or more paddles into position to pick up the target object with the pinching movement (Fevold: [0074]; [0093] continues to move toward the bale and engage the bale while moving forward (not stopping) with the bale arm tracks 1206 and 1212 running, thereby picking up the bale on the move; [0008] The method also comprises picking up located bales by the bale mover without stopping); picking up the target object from the ground by activating the hinge to make the pinching movement with the three or more paddles on the target object to be picked up while the object-collection system is in movement (Fevold: [0074]; [0093]; [0008] The method also comprises picking up located bales by the bale mover without stopping), and moving the three or more paddles from the pick-up height to the storage height after the target object has been picked, for the benefit of automating tedious and labor intensive work of moving bales of material from a field (Fevold: [0074]; [0093]; 
[0008] The method also comprises picking up located bales by the bale mover without stopping).

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify a method disclosed by Wilkins to include an automated ground machine with paddles and “scoop and score” for object removal taught by Fevold. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to automate tedious and labor intensive work of moving bales of material from a field.

Wilkins, as modified, does not explicitly disclose, three pinching grabbers, wherein one of the three or more paddles includes a hinge that enables a pinching movement to be made with the three or more paddles on the target object to be picked up; wherein each of the three or more paddles includes one or more rotating belts that are able to rotate in one or more of different speeds and different directions; using the three or more paddles and associated rotating belts to re-orient and rotate the target object during pickup.

However, in the same field of endeavor, Gielis discloses, three pinching grabbers, wherein one of the three or more paddles includes a hinge that enables a pinching movement to be made with the three or more paddles on the target object to be picked up (Gielis: [0052] the gripper mechanism 130 is provided with three fingers 131, 132, 133 … The fingers are not necessarily identical. The fingers are preferably designed such that the pressure along the length of the finger … all fingers have these properties); wherein each of the three or more paddles (Gielis: [0052] the gripper mechanism 130 is provided with three fingers 131, 132, 133) includes one or more rotating belts (Gielis: [0063] driven by a suitable belt 204 connected to a second pulley (not shown), driven by, for example, an electric motor (not shown).
It is clear that the pulley 202 is in this case driven in such a manner that the pulley 202 is rotated to and fro about the rotation point 127A between two predetermined angular positions ... a similar rotation about the rotation point 127, to and fro between two predetermined angular positions during the rotating movement of the gripper mechanism; [0086] on this end of the rotary shaft 420, the second pulley 206 is attached which, together with the belt transmission 204 and the first pulley 202, forms the drive mechanism 200 for the picking element 122) that are able to rotate in one or more of different speeds and different directions (Gielis: [0062] the angular position illustrated in FIG. 5a to the angular position illustrated in FIG. 5c, a rotation in the reverse direction of rotation may take place in order to move the gripper mechanism 130 back to the starting position); using the three or more paddles and associated rotating belts to re-orient and rotate the target object during pickup (Gielis: [0072] in order to achieve an accurate positioning of the robot arm 120 with respect to the fruit to be picked to perform the picking movement along the direction indicated by arrow A. However, the belt drive 740 is advantageous), for the benefit of actuating an end-effector at a specific azimuth and angle to grab an object at its position.

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the process disclosed by a modified Wilkins to include the pinching grabbers and to-and-fro rotation taught by Gielis. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to actuate an end-effector at a specific azimuth and angle to grab an object at its position.
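Stepping back from the citations, the limitations mapped above describe a two-stage perception-and-actuation pipeline: a first neural network screens aerial imagery for candidate objects, and a second network on the moving ground vehicle re-detects the target and uses its position in the frame to trigger the paddle pickup. The sketch below is purely illustrative of that claimed control flow; every name, threshold, and interface is hypothetical and comes from none of the cited references:

```python
# Illustrative control flow for the claimed two-stage pipeline. All names and
# thresholds are hypothetical; this is not code from Wilkins, Fevold, or Gielis.

from dataclasses import dataclass

@dataclass
class Detection:
    x: float          # vertical position in the frame, 0.0 (top) .. 1.0 (bottom)
    confidence: float

def aerial_scan(frames, first_nn):
    """Stage 1: a first ANN flags candidate objects in the aerial images."""
    return [d for f in frames for d in first_nn(f) if d.confidence > 0.5]

def ground_pass(frames, second_nn, picker):
    """Stage 2: a second ANN re-detects the target while the vehicle moves;
    the target's image position alone triggers the paddle pickup."""
    for frame in frames:
        for det in second_nn(frame):
            if det.x > 0.8:              # target low in the frame => within reach
                picker.lower_paddles()   # storage height -> pick-up height
                picker.pinch()           # hinge closes the three paddles
                picker.raise_paddles()   # back to storage height, still moving
                return True
    return False
```

This mirrors the claim structure the rejection addresses: separate trained models for the aerial and ground stages, with an image-based trigger for the pinching pickup while the vehicle remains in motion.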
REGARDING CLAIM 3, Wilkins, as modified, remains as applied above to claim 1, and further, Wilkins also discloses, receiving an order to scan the target geographical area for objects, the order including geographic boundary information for the target geographical area; generating a travel plan to cover the target geographical area based on the geographic boundary information; and instructing the image-collection vehicle to use the travel plan and traverse over the target geographical area to capture the first plurality of images (Wilkins: [FIG. 4(406, 412)]; (Col. 6, Ln. 39 - Col. 7, Ln. 15)).

REGARDING CLAIM 4, Wilkins, as modified, remains as applied above to claim 1, and further, Wilkins also discloses, receiving a travel plan indicating a flight path for the image-collection vehicle to move over the target geographical area; controlling movement of the image-collection vehicle along a travel path over the target geographical area based on the travel plan; and capturing the first plurality of images of the target geographical area along the travel path (Wilkins: (Col. 7, Ln. 9-26); [FIG. 2(specified tree)(specified quadrant)]).

REGARDING CLAIM 5, Wilkins, as modified, remains as applied above to claim 1, and further, Wilkins also discloses, tracking relative movement of the target object, with respect to the object-collection system, across the second plurality of images as the object-collection system is guided toward the one or more identified objects; determining when the target object is in position for the object-collection system to pick up the target object based on the tracked movement; and in response to a determination that the target object is in position for the object-collection system to pick up the target object, instructing the object-collection system to pick up the target object (Wilkins: (Col. 10, Ln. 58-63); (Col. 11, Ln. 6-24)).
REGARDING CLAIM 6, Wilkins, as modified, remains as applied above to claim 1, and further, Wilkins also discloses, obtaining an estimated boundary of the target geographical area (Wilkins: [FIG. 2(quadrant information)]; (Col. 6, Ln. 18-22); (Col. 8, Ln. 63 - Col. 9, Ln. 1-3)); displaying the estimated boundary of the target geographical area to a user (Wilkins: (Col. 6, Ln. 16-25)), wherein the estimated boundary is adjustable by the user (Wilkins: (Col. 12, Ln. 5-13)); receiving user adjustments to the estimated boundary of the target geographical area (Wilkins: see (Col. 12, Ln. 5-13)); generating geographic boundary information for the target geographical area based on the user adjusted estimated boundary of the target geographical area (Wilkins: implied, see (Col. 12, Ln. 5-13)); providing the geographic boundary information to the image-collection vehicle; and traversing the image-collection vehicle over the target geographical area based on the geographic boundary information (Wilkins: implied, see (Col. 12, Ln. 5-13)).

REGARDING CLAIM 7, Wilkins, as modified, remains as applied above to claim 1, and further, Wilkins also discloses, receiving an address of the target geographical area (Wilkins: (Col. 10, Ln. 40-45)); obtaining an image of the target geographical area based on the received address (Wilkins: (Col. 1, Ln. 17-19); [FIG. 4] as observed in figure 4, it is implied that during locating and harvesting imaging and identification is performed considering the disclosure as a whole; also see [FIG. 7(710, 712)]; (Col. 2, Ln. 13-17); (Col. 2, Ln. 60-63); (Col. 6, Ln. 48-49); (Col. 7, Ln. 57-60); (Col. 10, Ln. 67 - Col. 11, Ln. 5)); performing image recognition to identify edges of the target geographical area (Wilkins: (Col. 1, Ln. 17-19); [FIG. 4] as observed in figure 4, it is implied that during locating and harvesting, imaging and identification is performed when considering the disclosure as a whole; see [FIG. 7(710, 712)]; (Col. 2, Ln. 13-17); (Col.
2, Ln. 60-63); (Col. 6, Ln. 48-49); (Col. 7, Ln. 57-60); (Col. 10, Ln. 67 - Col. 11, Ln. 5)); and traversing the image-collection vehicle over the target geographical area based on the identified edges of the target geographical area (Wilkins: (Col. 1, Ln. 17-19); [FIG. 4] as observed in figure 4, it is implied that during locating and harvesting, imaging and identification is performed when considering the disclosure as a whole; see [FIG. 7(710, 712)]; (Col. 2, Ln. 13-17); (Col. 2, Ln. 60-63); (Col. 6, Ln. 48-49); (Col. 7, Ln. 57-60); (Col. 10, Ln. 67 - Col. 11, Ln. 5)). In this case, identifying edges is interpreted as identifying/locating a region, location, tree, product, etc.

REGARDING CLAIM 8, Wilkins, as modified, remains as applied above to claim 1, and further, Wilkins also discloses, capturing a first set of data using a first sensor at a first altitude above the target geographical area; identifying objects of interest from the first set of data; capturing a second set of data using a second sensor at a second altitude above the target geographical area, the second altitude being lower than the first altitude; and identifying the one or more objects from the second set of data (Wilkins: (examiner: this is top-to-bottom) (Col. 7, Ln. 25-36); (examiner: height) this ensures that the UAV 100 covers each unit of inventory such as, for example, each tree 212, bush, row, shelf, or another unit (Col. 7, Ln. 6-8)).

REGARDING CLAIM 9, Wilkins, as modified, remains as applied above to claim 1, and further, Wilkins also discloses, storing geographic boundary information for the target geographical area, the first plurality of images of the target geographical area (Wilkins: (Col. 14, Ln. 19-21); (Col. 6, Ln. 45-47)), and avionic telemetry information associated with the first plurality of images in a target-area database (Wilkins: (Col. 7, Ln. 20-30); (Col. 1, Ln. 16-22)).
REGARDING CLAIM 10, Wilkins, as modified, remains as applied above to claim 1, and further, Fevold also discloses, wherein the hinge is a multi-component linkage that includes multiple arms and joints (Fevold: [FIG. 12(1208)(1209)(1214)(1215)(1204)(1210)]).

REGARDING CLAIM 11, Wilkins, as modified, remains as applied above to claim 1, and further, Wilkins also discloses, the object information for each of the one or more objects includes a location of a corresponding object within the target geographic area (Wilkins: (Col. 14, Ln. 19-21); (Col. 6, Ln. 45-47)) and an approximate size of the corresponding object (Wilkins: (Col. 4, Ln. 63 - Col. 5, Ln. 3)).

REGARDING CLAIM 12, Wilkins, as modified, remains as applied above to claim 1, and further, Wilkins also discloses, displaying a heat map of the target geographical area to a user based on a density of the one or more identified objects across the target geographical area (Wilkins: (Col. 6, Ln. 6-15); (Col. 6, Ln. 16-25)).

Claim(s) 2 and 13-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wilkins (US 10796275 B1) in view of Fevold (US 20190294914 A1) and Gielis (US 20200323140 A1) as applied to claim 1 above, and further in view of Li (US 20180109767 A1).

REGARDING CLAIM 2, Wilkins, as modified, remains as applied above to claim 1, and further, Wilkins also discloses, capturing avionic telemetry information of the image-collection vehicle when each of the first plurality of images is captured (Wilkins: (Col. 7, Ln. 20-30); (Col. 1, Ln. 16-22)). Wilkins, as modified above, does not explicitly disclose, reducing distortion in the first plurality of images based on the avionic telemetry information prior to identifying the one or more objects in the first plurality of images.
However, in the same field of endeavor, Li discloses, reducing distortion in the first plurality of images based on the avionic telemetry information prior to identifying the one or more objects in the first plurality of images (Li: [0003]; [0015-0016]; [0049]; [0039], [Claim 28]), for the benefit of ensuring that the quality of the taken images is adequate for element identification by a reviewing user.

Li does not explicitly recite the terminology "reducing distortion in the first plurality of images based on the avionic telemetry information prior to identifying the one or more objects in the first plurality of images". However, Li discloses retaking photos when it is determined that the captured images do not meet a quality criterion (blur or out of focus), all for the purpose of identifying the associated geographic area, identifying geospatial locations [0039], or an estimated geospatial location for the particular digital image [Claim 28]. The examiner respectfully submits that retaking a photo to cure blur is reducing distortion in stored images for further processing.

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by a modified Wilkins to include addressing image quality taught by Li. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to ensure that the quality of the taken images is adequate for element identification by a reviewing user.

REGARDING CLAIM 13, Wilkins discloses, an image-collection vehicle including: a movement system configured to fly and move the image-collection vehicle over a target geographical area defined by geographic boundary information (Wilkins: (Col. 5, Ln. 18-20); [Fig. 2(102, UAV)(202, field/inventory area)]; (Col. 6, Ln. 4-25)); a first camera (Wilkins: (Col. 3, Ln.
4-5)); a first processor; and a first memory that stores first computer instructions that, when executed by the first processor, cause the first processor to (Wilkins: [Claim 16]): receive a travel plan indicating a travel path for the image-collection vehicle to move over the target geographical area (Wilkins: [Claim 16]; [FIG. 3(306)]); control movement of the image-collection vehicle along the travel path over the target geographical area based on the travel plan (Wilkins: [FIG. 3(308)]); capture, via the first camera, a first plurality of images of the target geographical area along the travel path (Wilkins: (Col. 7, Ln. 20-30)); and capture avionic telemetry information of the image-collection vehicle when each of the first plurality of images is captured (Wilkins: (Col. 1, Ln. 16-22)); an object-detection server including: a second processor; and a second memory that stores second computer instructions that, when executed by the second processor, cause the second processor to: obtain the first plurality of images and the avionic telemetry information for the target geographical area (Wilkins: (Col. 7, Ln. 20-30); (Col. 1, Ln. 16-22)); and determine object information for each of the one or more identified objects that were identified using the first artificial neural network (Wilkins: (Col. 9, Ln. 13-27); (Col. 9, Ln. 28-37)); and a ground-based object-collection system including: a second camera (Wilkins: (Col. 9, Ln. 28-37); (Col. 10, Ln. 20-23); (Col. 14, Ln. 37-50); (Col. 10, Ln. 58-65)); an object-collection system configured to pick up objects off the ground (Wilkins: (Col. 9, Ln. 28-37); (Col. 10, Ln. 20-23); (Col. 14, Ln. 37-50); (Col. 10, Ln. 58-65)); a third processor; and a third memory that stores third computer instructions that, when executed by the third processor, cause the third processor to: obtain the object information for each of the one or more identified objects (Wilkins: (Col. 9, Ln. 
13-27)); after the remotely operable image-collection vehicle has completed scanning the target geographical area and processing the first plurality of images, guide the object-collection system over the target geographical area toward the one or more identified objects based on the object information from the first plurality of images (Wilkins: (Col. 9, Ln. 28-37); (Col. 10, Ln. 20-23); (Col. 14, Ln. 37-50); (Col. 10, Ln. 58-65)).

Wilkins does not explicitly disclose a second memory and processor. However, Wilkins discloses a processor, one or more non-transitory computer-readable media, and computer-executable instructions that perform limitations that the second memory and processor perform, which reads on a duplication of parts. The examiner respectfully submits that an artificial neural network (ANN) model involves computations and mathematics, i.e., algorithms. And further, one of ordinary skill recognizes that a UAV that recognizes a level of ripeness implies background training. In considering the disclosure of a reference, it is proper to take into account not only specific teachings of the reference but also the inferences which one skilled in the art would reasonably be expected to draw therefrom. In this case, "tracking" is interpreted as locating an object across a series of images. Wilkins is replete with recitations of examining/inspecting (scanning) produce and providing location information to a UGV, thus disclosing scanning and guiding. If an object is inspected (imaging device) by a first vehicle (first image stream), and based upon inspection the UGV is guided to a location for a specific task, that is explicitly "guiding a ground-based object-collection system over the target geographical area toward the one or more identified objects based on the object information from the first plurality of images". Wilkins also discloses an automated ground machine for locating and picking/grasping specified objects (see at least figures 5a and 6).
Wilkins does not explicitly disclose, identify one or more objects on a ground in the reduced distortion first plurality of images based on a first dataset of trained object parameters in a first artificial neural network; capture, via the second camera, a second plurality of images of the ground relative to the object-collection system while the object-collection system is being guided towards the one or more identified objects on the ground; identify a target object on the ground in the second plurality of images based on a second dataset of trained object parameters in a second artificial neural network; determine, using image-based information, when the target object is at a position in at least one of the plurality of images to trigger pick up the target object using three or more paddles, and in response to determining, using image-based information, that the target object is at the position to trigger pick up the target object using the three or more paddles, instruct the ground-based object picker assembly to move the three or more paddles from a storage height to a pick-up height to pick up the target object from the ground using the three or more paddles to make the pinching movement on the target object, wherein the image-based information is utilized to activate movement of the three or more paddles into position to pick up the target object with the pinching movement; pick up the target object from the ground by activating the hinge to make the pinching movement with the three or more paddles on the target object to be picked up, and move the three or more paddles from the pick-up height to the storage height after the target object has been picked. 
However, in the same field of endeavor, Fevold discloses, identify one or more objects on a ground in the reduced distortion first plurality of images based on a first dataset of trained object parameters in a first artificial neural network (Fevold: [0049]); capture, via the second camera, a second plurality of images of the ground relative to the object-collection system while the object-collection system is being guided towards the one or more identified objects on the ground (Fevold: [0031-0033]; [0044]; [0063-0064]; [0068]; [0072]; [0081-0083]; [0093]; [0096]); identify a target object on the ground in the second plurality of images based on a second dataset of trained object parameters in a second artificial neural network (Fevold: [0049]); determine, using image-based information, when the target object is at a position in at least one of the plurality of images to trigger pick up the target object using three or more paddles (Fevold: [0033-0035]; [0063]; [0073]; [FIG. (1202, paddles)]; paragraphs [0065-0070] disclose bale mapping, [0071-0079] disclose the autonomous bale mover locating bales via stereo cameras and other support sensors), and in response to determining, using image-based information, that the target object is at the position to trigger pick up the target object using the three or more paddles (Fevold: [0074]; [0093]; [0008]), instruct the ground-based object picker assembly to move the three or more paddles from a storage height to a pick-up height to pick up the target object from the ground using the three or more paddles to make the pinching movement on the target object (Fevold: [0074]; [0093]; [0008]), wherein the image-based information is utilized to activate movement of the three or more paddles into position to pick up the target object with the pinching movement (Fevold: [0074]; [0093]; [0008]); pick up the target object from the ground by activating the hinge to make the pinching movement with the three or more paddles on the 
target object to be picked up (Fevold: [0074]; [0093]; [0008]), and move the three or more paddles from the pick-up height to the storage height after the target object has been picked (Fevold: [0074]; [0093]; [0008]), for the benefit of automating the tedious and labor-intensive work of moving bales of material from a field.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Wilkins to include an automated ground machine with paddles for “scoop and score” object removal, as taught by Fevold. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to automate the tedious and labor-intensive work of moving bales of material from a field.

Wilkins, as modified, does not explicitly disclose: three pinching grabbers, wherein one of the three or more paddles includes a hinge that enables a pinching movement to be made with the three or more paddles on the target object to be picked up; wherein each of the three or more paddles includes one or more rotating belts that are able to rotate in one or more of different speeds and different directions; using the three or more paddles and associated rotating belts to re-orient and rotate the target object during pickup.

However, in the same field of endeavor, Gielis discloses three pinching grabbers, wherein one of the three or more paddles includes a hinge that enables a pinching movement to be made with the three or more paddles on the target object to be picked up (Gielis: [0052] the gripper mechanism 130 is provided with three fingers 131, 132, 133 … The fingers are not necessarily identical. 
The fingers are preferably designed such that the pressure along the length of the finger … all fingers have these properties); wherein each of the three or more paddles (Gielis: [0052] the gripper mechanism 130 is provided with three fingers 131, 132, 133) includes one or more rotating belts (Gielis: [0063] driven by a suitable belt 204 connected to a second pulley (not shown), driven by, for example, an electric motor (not shown). It is clear that the pulley 202 is in this case driven in such a manner that the pulley 202 is rotated to and fro about the rotation point 127A between two predetermined angular positions ... a similar rotation about the rotation point 127, to and fro between two predetermined angular positions during the rotating movement of the gripper mechanism; [0086] on this end of the rotary shaft 420, the second pulley 206 is attached which, together with the belt transmission 204 and the first pulley 202, forms the drive mechanism 200 for the picking element 122) that are able to rotate in one or more of different speeds and different directions (Gielis: [0062] the angular position illustrated in FIG. 5a to the angular position illustrated in FIG. 5c, a rotation in the reverse direction of rotation may take place in order to move the gripper mechanism 130 back to the starting position); using the three or more paddles and associated rotating belts to re-orient and rotate the target object during pickup (Gielis: [0072] in order to achieve an accurate positioning of the robot arm 120 with respect to the fruit to be picked to perform the picking movement along the direction indicated by arrow A. However, the belt drive 740 is advantageous), for the benefit of actuating an end-effector at a specific azimuth and angle to grab an object at its position. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the process disclosed by the modified Wilkins to include additional pinch grabbers and to-and-fro rotation, as taught by Gielis. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to actuate an end-effector at a specific azimuth and angle to grab an object at its position.

Wilkins, as modified, does not explicitly disclose reducing distortion in the first plurality of images based on the avionic telemetry information. However, in the same field of endeavor, Li discloses reducing distortion in the first plurality of images based on the avionic telemetry information prior to identifying the one or more objects in the first plurality of images (Li: [0003]; [0015-0016]; [0049]; [0039]; [Claim 28]), for the benefit of ensuring that the quality of the captured images is adequate for element identification by a reviewing user. Li does not explicitly recite the terminology "reducing distortion in the first plurality of images based on the avionic telemetry information prior to identifying the one or more objects in the first plurality of images". However, Li discloses retaking photos when it is determined that the captured images do not meet a quality criterion (blur or out of focus), all for the purpose of identifying the associated geographic area, identifying geospatial locations [0039], or estimating a geospatial location for the particular digital image [Claim 28]. The examiner respectfully submits that retaking a photo to cure blur is reducing distortion in stored images for further processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by the modified Wilkins to include addressing image quality, as taught by Li. 
One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to ensure that the quality of the captured images is adequate for element identification by a reviewing user.

REGARDING CLAIM 14, Wilkins, as modified, remains as applied above to claim 13, and further, Wilkins also discloses a target-area database that stores the geographic boundary information for the target geographical area, the first plurality of images of the target geographical area (Wilkins: (Col. 6, Ln. 6-15); (Col. 6, Ln. 16-25)), and the avionic telemetry information associated with the first plurality of images (Wilkins: (Col. 7, Ln. 20-30); (Col. 1, Ln. 16-22); (Col. 7, Ln. 34-36); (Col. 7, Ln. 64 - Col. 8, Ln. 2)).

REGARDING CLAIM 15, Wilkins, as modified, remains as applied above to claim 13, and further, Wilkins also discloses an object-information database that stores the object information for the one or more identified objects in the target geographical area (Wilkins: [Claim 2]; [Claim 6]).

REGARDING CLAIM 16, Wilkins, as modified, remains as applied above to claim 13, and further, Wilkins also discloses that the second processor on the object-detection server is the first processor on the image-collection vehicle (Wilkins: [Claim 15]; [Claim 16]; (examiner: duplication of parts)).

REGARDING CLAIM 17, Wilkins, as modified, remains as applied above to claim 13, and further, Wilkins also discloses that the object information for each of the one or more objects includes a location of a corresponding object within the target geographic area (Wilkins: (Col. 14, Ln. 19-21); (Col. 6, Ln. 45-47)) and an approximate size of the corresponding object (Wilkins: (Col. 4, Ln. 63 - Col. 5, Ln. 3)). 
REGARDING CLAIM 18, Wilkins, as modified, remains as applied above to claim 13, and further, Wilkins also discloses: receive an order to scan the target geographical area for objects, the order including the geographic boundary information for the target geographical area (Wilkins: [FIG. 4(406, 412)]; (Col. 6, Ln. 39 - Col. 7, Ln. 15)); generate the travel plan to cover the target geographical area based on the geographic boundary information (Wilkins: [FIG. 4(406, 412)]; (Col. 6, Ln. 39 - Col. 7, Ln. 15)); and provide the travel plan to the image-collection vehicle (Wilkins: [FIG. 4(406, 412)]; (Col. 6, Ln. 39 - Col. 7, Ln. 15)). Wilkins does not explicitly disclose a fourth processor. However, Wilkins discloses the limitations being performed; the additional processor is a mere duplication of parts.

REGARDING CLAIM 19, Wilkins, as modified, remains as applied above to claim 13, and further, Wilkins also discloses a non-transitory computer-readable storage medium that stores fourth computer instructions that, when executed by a fourth processor on a mobile user computer (Wilkins: (Col. 5, Ln. 50-53); (Col. 7, Ln. 64); [Claim 15]; [Claim 16]), cause the fourth processor to: obtain an estimated boundary of the target geographical area (Wilkins: [FIG.2(quadrant information)]; (Col. 6, Ln. 18-22); (Col. 8, Ln. 63 - Col. 9, Ln. 1-3)); display the estimated boundary of the target geographical area to a user (Wilkins: (Col. 6, Ln. 16-25)), wherein the estimated boundary is adjustable by the user (Wilkins: (Col. 12, Ln. 5-13)); receive user adjustments to the estimated boundary of the target geographical area (Wilkins: (Col. 12, Ln. 5-13)); generate the geographic boundary information based on the user-adjusted estimated boundary of the target geographical area (Wilkins: (Col. 12, Ln. 5-13)); and provide the geographic boundary information to the image-collection vehicle (Wilkins: (Col. 12, Ln. 5-13)). Wilkins does not explicitly disclose a fourth processor. 
However, Wilkins discloses the limitations being performed; the additional processor is a mere duplication of parts. Wilkins does not explicitly recite the terminology "displaying the estimated boundary of the target geographical area to a user". However, Wilkins discloses providing detailed location data, including tree, quadrant, and coordinates. The examiner respectfully submits that, whether this is presented in text or images, it is still displaying estimated boundaries for a user.

REGARDING CLAIM 20, Wilkins, as modified, remains as applied above to claim 19, and further, Wilkins also discloses that execution of the fourth computer instructions by the fourth processor to obtain the estimated boundary of the target geographic area causes the fourth processor to (Wilkins: (Col. 5, Ln. 50-53); (Col. 7, Ln. 64); [Claim 15]; [Claim 16]): receive an address of the target geographical area (Wilkins: (Col. 10, Ln. 40-45)); obtain an image of the target geographical area based on the received address (Wilkins: (Col. 1, Ln. 17-19); [FIG. 4] as observed in figure 4, it is implied that during locating and harvesting, imaging and identification are performed, considering the disclosure as a whole; also see [FIG. 7(710, 712)]; (Col. 2, Ln. 13-17); (Col. 2, Ln. 60-63); (Col. 6, Ln. 48-49); (Col. 7, Ln. 57-60); (Col. 10, Ln. 67 - Col. 11, Ln. 5)); perform image recognition to identify edges of the target geographical area (Wilkins: (Col. 1, Ln. 17-19); [FIG. 4]; [FIG. 7(710, 712)]; (Col. 2, Ln. 13-17); (Col. 2, Ln. 60-63); (Col. 6, Ln. 48-49); (Col. 7, Ln. 57-60); (Col. 10, Ln. 67 - Col. 11, Ln. 5)); generate the geographic boundary information based on the identified edges (Wilkins: see at least (Col. 6, Ln. 4-40, and continuing Col. 6, Ln. 66 - Col. 7, Ln. 9) for creating a database of regions with most fruit ripe for picking, specified quadrants, and specific tree); and provide the geographic boundary information to the image-collection vehicle (Wilkins: see at least (Col. 6, Ln. 
4-40, and continuing Col. 6, Ln. 66 - Col. 7, Ln. 9) for regions with most fruit ripe for picking, specified quadrants, and specific tree, for fruit to be picked by a UAV at time of request). Wilkins does not explicitly disclose a fourth processor. However, Wilkins discloses the limitations being performed; the additional processor is a mere duplication of parts. In this case, identifying edges is interpreted as identifying/locating a region, location, tree, product, etc.

Response to Arguments

Applicant’s arguments with respect to the rejection of the independent claims have been considered but are moot because the new ground of rejection does not rely on the reference combination applied in the prior rejection of record for the matter specifically challenged in the argument. To the examiner’s best understanding, the amendments added a prong to the two-prong grabber to create a three-prong grabber. As cited above, the newly introduced prior art reference, Gielis (US 20200323140 A1), discloses a three-pronged grabber ([0052]) to actuate an end-effector at a specific azimuth and angle to grab an object at its position.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Maor (US 20190166765 A1); Bleiweiss (US 20170351933 A1). Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARRON SANTOS whose telephone number is (571)272-5288. The examiner can normally be reached Monday - Friday: 8:00am - 4:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANGELA ORTIZ can be reached at (571) 272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.S./
Examiner, Art Unit 3663

/ANGELA Y ORTIZ/
Supervisory Patent Examiner, Art Unit 3663
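The reply-period rule stated in the final action above is mechanical enough to sketch in code. This is an illustrative model only, not legal advice or USPTO tooling: the function names are hypothetical, and the end-of-month clamping is an assumption about how calendar months are counted.

```python
from datetime import date
from typing import Optional

# Models the rule from the action: a THREE-MONTH shortened statutory period,
# extendable under 37 CFR 1.136(a) but capped at SIX MONTHS from mailing.
# If the first reply is filed within TWO MONTHS and the advisory action mails
# after the three-month date, extension fees run from the advisory mailing date.

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of the target month."""
    month = d.month - 1 + months
    year = d.year + month // 12
    month = month % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    last_day = [31, 29 if leap else 28, 31, 30, 31, 30,
                31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last_day))

def extension_fee_start(mailed: date, first_reply: date,
                        advisory_mailed: Optional[date]) -> date:
    """Date from which 37 CFR 1.136(a) extension fees would be calculated."""
    three_month = add_months(mailed, 3)
    if (advisory_mailed is not None
            and first_reply <= add_months(mailed, 2)  # reply within two months
            and advisory_mailed > three_month):       # advisory mailed late
        return advisory_mailed
    return three_month

def statutory_deadline(mailed: date) -> date:
    """The reply can never be due later than six months from mailing."""
    return add_months(mailed, 6)
```

Taking the Mar 06, 2026 date from the timeline below as the mailing date of the current action, the three-month date falls on Jun 06, 2026 and the absolute statutory deadline on Sep 06, 2026; a timely two-month reply followed by a late-mailed advisory action would shift the fee-calculation start to the advisory mailing date.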

Prosecution Timeline

Jul 12, 2019
Application Filed
Apr 05, 2022
Non-Final Rejection — §103
Jul 07, 2022
Response Filed
Aug 16, 2022
Final Rejection — §103
Dec 20, 2022
Request for Continued Examination
Dec 21, 2022
Response after Non-Final Action
May 25, 2023
Non-Final Rejection — §103
Oct 04, 2023
Interview Requested
Oct 12, 2023
Examiner Interview Summary
Oct 12, 2023
Applicant Interview (Telephonic)
Nov 06, 2023
Response Filed
Dec 12, 2023
Final Rejection — §103
Apr 19, 2024
Response after Non-Final Action
May 20, 2024
Request for Continued Examination
May 21, 2024
Response after Non-Final Action
Jul 02, 2024
Non-Final Rejection — §103
Oct 25, 2024
Response Filed
Dec 03, 2024
Final Rejection — §103
Apr 17, 2025
Request for Continued Examination
Apr 20, 2025
Response after Non-Final Action
Sep 18, 2025
Non-Final Rejection — §103
Jan 22, 2026
Response Filed
Mar 06, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12482356
TRANSPORT MANAGEMENT DEVICE, TRANSPORT MANAGEMENT METHOD, AND TRANSPORT SYSTEM
2y 5m to grant Granted Nov 25, 2025
Patent 12454311
STEER-BY-WIRE STEERING DEVICE AND METHOD FOR CONTROLLING THE SAME
2y 5m to grant Granted Oct 28, 2025
Patent 12428170
METHODS AND APPARATUS FOR AUTOMATIC DRONE RESUPPLY OF A PRODUCT TO AN INDIVIDUAL BASED ON GPS LOCATION, WITHOUT HUMAN INTERVENTION
2y 5m to grant Granted Sep 30, 2025
Patent 12427974
MULTIPLE MODE BODY SWING COLLISION AVOIDANCE SYSTEM AND METHOD
2y 5m to grant Granted Sep 30, 2025
Patent 12372360
Methods and Systems for Generating Alternative Routes
2y 5m to grant Granted Jul 29, 2025
Based on 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
45%
Grant Probability
58%
With Interview (+12.8%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 131 resolved cases by this examiner. Grant probability derived from career allow rate.
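The "With Interview" figure above appears to be the career allow rate plus an additive lift in percentage points (45% + 12.8 points ≈ 58%). A one-line sketch under that assumption; the function name and the additive model are my own, not the dashboard's documented formula:

```python
# Assumed derivation of the "With Interview" grant probability:
# career allow rate plus an additive interview lift, capped at 100%.
def with_interview(base_rate_pct: float, lift_points: float) -> float:
    return min(base_rate_pct + lift_points, 100.0)

# 45.0 + 12.8 = 57.8, which rounds to the 58% shown above.
```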
