Prosecution Insights
Last updated: April 19, 2026
Application No. 17/957,154

DETECTION OF OBJECT STRUCTURAL STATUS

Final Rejection §103
Filed: Sep 30, 2022
Examiner: FELIX, BRADLEY OBAS
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Amazon Technologies, Inc.
OA Round: 2 (Final)
Grant Probability: 12% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 6m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 12% (2 granted / 17 resolved; -50.2% vs TC avg). Grants only 12% of cases.
Interview Lift: strong, +66.7% among resolved cases with an interview vs. without.
Typical Timeline: 3y 6m average prosecution; 29 applications currently pending.
Career History: 46 total applications across all art units.

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 62.9% (+22.9% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 17 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendment

The amendment filed on 08/13/2025 has been entered and made of record. The applicant has canceled claims 3-4 and 6, and withdrawn claims 16-21. Claims 1-2, 5, and 7-15 remain pending.

Response to Arguments

Applicant's arguments, see Remarks pages 1-3, filed 08/13/2025, with respect to the rejections of claims 5-7 and 13 under 35 U.S.C. 102 have been fully considered and are persuasive. The rejection has therefore been withdrawn. However, upon further consideration, a new ground of rejection is made in view of FAERS, in combination with Li and Changying, as applied to amended claim 1.

Additionally, Applicant states on page 2 of the Remarks that "Li and Pan, taken alone or in combination, fail to teach or suggest at least the limitations of…". Examiner notes that Li was not used in combination with Pan in the 35 U.S.C. 102 rejections of claims 5-7 and 13; only Pan was applied, as it was a 102 rejection.

Applicant's arguments with respect to claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Therefore, the new ground of rejection is made in view of FAERS, in combination with Li and Changying. Thus, this action is made FINAL.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5, 7, and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Malcolm FAERS, US-20230029636-A1 (hereinafter FAERS), in view of Shuping Li et al. (included within the applicant's IDS dated 1/30/2024; hereinafter Li), and further in view of Changying Li et al. (hereinafter Changying).

As per claim 1, FAERS discloses a method comprising:

obtaining a training data set of sample stereoscopic images indicating reactions of sample objects to a predetermined amount of air impacting the sample objects (see FAERS ¶57-58, wherein training image data is disclosed; see also ¶81, wherein a stereo vision sensor for stereoscopic images is disclosed; see further ¶109-110, wherein air is blown onto crops, i.e., sample objects);

labeling the training data set to indicate (see FAERS ¶57, wherein tagging the images is disclosed):

first reactions of first sample objects, known to have a desired structural status (see FAERS ¶55-57, wherein the disease of the crop is determined and measured, and though not explicitly disclosed, the crop would be healthy if no diseases were identified), to the air (see FAERS ¶109-110, wherein the UAV blows air at the abaxial side of the leaves of the crop); and

second reactions of second sample objects, known to have an undesired structural status (see FAERS ¶55-57, wherein the disease of the crop is determined and measured), to the air (see FAERS ¶109-110, wherein the UAV blows air at the abaxial side of the leaves of the crop);

training an object model, via supervised machine learning, to identify predictive features in the training data set that are predictive of the desired structural status or the undesired structural status (see FAERS ¶55, wherein machine learning is trained and used to determine if a crop is suffering from any type of nutritional deficiency, i.e., an undesired structural status; otherwise the status would be desirable);

positioning a scanning device at a target position relative to an object, wherein the scanning device is an aerial drone comprising (see FAERS ¶43 and FIG. 5, wherein a UAV approaching a crop is disclosed):

a stimulus source configured to output the predetermined amount of air (see FAERS ¶109, wherein the UAV comprises a plurality of air blower modules); and

causing the air to be output, from the stimulus source, towards the object based on the scanning device being at the target position relative to the object (see FAERS ¶109-110, wherein the UAV blows air at the abaxial side of the leaves of the crop);

capturing, via the camera, stereoscopic images indicating a reaction of the object to the predetermined amount of air (see FAERS ¶72, wherein the plurality of images is disclosed; see also ¶81, wherein a stereo vision sensor for stereoscopic images is disclosed); and

predicting, by the object model, a structural status of the object based on instances of the predictive features indicated in the stereoscopic images (see FAERS ¶57, wherein the machine model determines if the crop is diseased, i.e., its structural status, based on its image).

However, FAERS fails to explicitly disclose a predetermined amount of air and multiple cameras, where Li teaches:

second reactions of second sample objects, known to have an undesired structural status, to the predetermined amount of air (see Li page 8/14 and Table 1, wherein an air-puff with a correlation coefficient of 0.80 is used to determine the firmness of a blueberry; it would be obvious that a blueberry below this correlation coefficient is considered unfirm and undesirable);

causing the predetermined amount of air to be output (see Li page 8/14 and Table 1, wherein an air-puff with a correlation coefficient of 0.80 is used);

at least two cameras (see Li page 5/14, wherein multiple cameras are disclosed in the camera vision system); and

capturing, via the at least two cameras, images (see Li page 5/14, wherein multiple cameras are disclosed in the camera vision system).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify FAERS's method with Li's teaching to include the predetermined amount of air and multiple cameras, in order to obtain a precise measurement of the air used for the reaction and additional perspectives for stereoscopic vision.

However, FAERS in combination with Li fails to explicitly disclose the following, which Changying teaches:

wherein the first reactions of the first sample objects (see Changying pages 4-5/6, Maximum displacement measurement by laser air-puff, wherein the firmness measurement of freshly harvested blueberries is disclosed) and the second reactions of the second sample objects (see Changying pages 4-5/6, Maximum displacement measurement by laser air-puff, wherein the firmness measurement of week-1 blueberries is disclosed) are associated with different vibrations induced by the instances of the predetermined amount of air (see Changying page 3/6, Laser air-puff test, wherein displacement, or vibration, is used to measure the reaction of the blueberries to the air puff; see further pages 4-5/6, Maximum displacement measurement by laser air-puff, wherein the firmness of a freshly harvested blueberry and an aged blueberry are measured); and

the reaction of the object comprises vibrations caused at least in part by waves, induced by the predetermined amount of air impacting the object, propagating through an internal structure of the object (see Changying page 3/6, Laser air-puff test, and FIG. 1, wherein the deformation waveform using the air-puff instrument is disclosed; the displacement, or vibration, and springiness of the blueberry are measured with the deformation waveform, and the firmness, i.e., the solidity of the internal structure, is obtained).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of FAERS in combination with Li with Changying's teaching to include the air-puff vibrations in the reaction measurements, in order to further determine the firmness, or structural, reaction of the produce.

As per claim 2, FAERS, in combination with Li and Changying, discloses the method of claim 1, wherein the object is a piece of produce, the undesired structural status is indicative of spoiled produce, and the structural status predicted by the object model indicates whether the piece of produce is likely to have the desired structural status or the undesired structural status (see FAERS ¶55, wherein machine learning is trained and used to determine if a crop is suffering from any type of nutritional deficiency, i.e., an undesired structural status or spoilage; if the crop does not suffer from any deficiencies, the status is desirable).

As per claim 5, the rationale provided in claim 1 is incorporated herein. Additionally, Changying discloses that the stimulus agent is a puff of air (see Changying page 3/6, Laser air-puff test, and FIG. 1, wherein the deformation waveform using the air-puff instrument is disclosed).

As per claim 7, FAERS, in combination with Li and Changying, discloses the method of claim 5, wherein:

the training data set is further labeled to indicate a plurality of reasons associated with the second sample objects having the undesired structural status (see FAERS ¶57, wherein the size, location, and specific diseases are included in the tagged imagery);

the object model is trained, based at least in part on the training data set, to identify second predictive features that are predictive of the plurality of reasons associated with the second sample objects having the undesired structural status (see FAERS ¶57, wherein the machine learning analyzer includes the crop disease information, which is used to determine if a crop has a disease); and

the structural status, predicted by the object model based at least in part on the sensor data, indicates that the object is likely to have the undesired structural status and at least one reason associated with the undesired structural status (see FAERS ¶57, wherein the machine learning analyzer is trained on the tagged imagery, and the trained model is used to determine the diseases on the crop).

As per claim 11, FAERS, in combination with Li and Changying, discloses the method of claim 5, further comprising:

positioning, at different times, the scanning device proximate to different objects in an environment (see FAERS ¶64, wherein the UAV containing the scanning device flies to different crops within the harvest field);

causing the stimulus agent to be output, from the stimulus source, towards the different objects at the different times (see Changying page 2/6, Laser air-puff test, wherein the pressurized air is projected towards a blueberry);

capturing, via the one or more sensors at the different times, different instances of the sensor data indicating reactions of the different objects to the stimulus agent (see Changying pages 3-4, Results and Discussion, and Figures 2-3, wherein the firmness measurement of the berries each week is recorded or captured); and

predicting, by the object model, structural statuses of the different objects based at least in part on the different instances of the sensor data (see Changying page 4, Maximum displacement, wherein the firmness of the different blueberries is measured or predicted using the firmness index).

As per claim 12, FAERS, in combination with Li and Changying, discloses the method of claim 11, further comprising selecting the different objects from a set of objects in the environment at random or based at least in part on a grid pattern within the environment (see Changying page 2/6, Firmness Test, wherein the berries were selected randomly in the harvest environment).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over FAERS, in combination with Li and Changying, in further view of Lili Zhu et al. (included within the applicant's IDS dated 1/30/2024; hereinafter Zhu).

As per claim 8, FAERS, in combination with Li and Changying, discloses the method of claim 5, wherein the one or more sensors are cameras, the sensor data comprises objects, and the method further comprises determining disparities between the objects that indicate the reaction of the object as vibrations or movements caused at least in part by waves, induced by an impact of the stimulus agent on an exterior of the object, propagating through an internal structure of the object (see Changying pages 4-5/6, Maximum displacement, Table 1, and Figures 2-3, wherein the differences between the objects using the air-puff method to measure firmness are disclosed).

However, FAERS, in combination with Li and Changying, fails to explicitly disclose, but Zhu teaches, that the one or more sensors are cameras and the sensor data comprises stereoscopic images (see Zhu page 2/17, wherein stereoscopic images are used to determine the irregular shape of an object according to its volume and mass). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of FAERS in combination with Li and Changying with Zhu's teaching to include stereoscopic images in the sensor data, in order to add depth data for better measurement of the interior of the object.

Claims 9-10 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over FAERS, in combination with Li and Changying, in further view of Zhu.

As per claim 9, FAERS, in combination with Li and Changying, fails to explicitly disclose, but Zhu teaches, the method of claim 5 further comprising selecting the object model from an object model database storing a plurality of different object models corresponding to different types or classifications of objects, based at least in part on a type or classification of the object (see Zhu page 5/17, Section 2.2.3, Fig. 3, and Table 3, wherein machine learning classification of the different types of foods is disclosed; see also Table 4, wherein the MVS is used with traditional ML that retrieves classifications from a labeled set, as disclosed in page 7/17, Section 4.1). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of FAERS in combination with Li and Changying with Zhu's teaching to include the object model database, in order to improve the model by having a reference of the objects being analyzed or classified.

As per claim 10, FAERS, in combination with Li, Changying, and Zhu, discloses the method of claim 9, further comprising:

using at least two object models, of the plurality of different object models, to predict at least two structural status predictions based at least in part on the sensor data in association with corresponding confidence levels (see Zhu page 8/17, Section 4.1.2, wherein the machine model observes apples to determine if they are bruised, unbruised, or if there was a glare); and

determining the type or classification of the object based at least in part on one structural status prediction, of the at least two structural status predictions, that is associated with a highest confidence level of the corresponding confidence levels (see Zhu, bottom of page 8/17, Section 4.1.2, wherein the detection of bruised apples had a 98% accuracy).

As per claim 14, FAERS, in combination with Li and Changying, fails to explicitly disclose, but Zhu teaches, the method of claim 5, wherein the object model executes via one or more computing resources of a service provider network, and the method further comprises sending the sensor data from the scanning device to the object model via at least one network (see Zhu, top of page 5/17, wherein the support vector machine assists the machine learning model in interpreting the information obtained from the image, i.e., sensor data from the scanning device). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of FAERS in combination with Li and Changying with Zhu's teaching to include a service provider network, in order to add a computer that can access the data via a network.

As per claim 15, FAERS, in combination with Li and Changying, fails to explicitly disclose, but Zhu teaches, the method of claim 5, wherein the object model executes, to predict the structural status of the object, via one or more edge computing devices associated with the scanning device (see Zhu page 10/17, Section 4.2.1, wherein edge computation is disclosed; see also page 14/17, Section 5, number 5). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of FAERS in combination with Li and Changying with Zhu's teaching to include edge computation, in order to more accurately obtain the structure of the object.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Bradley Obas Felix, whose telephone number is (703) 756-1314. The examiner can normally be reached M-F 8-5 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/BRADLEY O FELIX/
Examiner, Art Unit 2671

/VINCENT RUDOLPH/
Supervisory Patent Examiner, Art Unit 2671

Prosecution Timeline

Sep 30, 2022
Application Filed
May 06, 2025
Non-Final Rejection — §103
Jul 21, 2025
Applicant Interview (Telephonic)
Jul 21, 2025
Examiner Interview Summary
Aug 13, 2025
Response Filed
Nov 12, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592076 — OBJECT IDENTIFICATION SYSTEM AND METHOD
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12340540 — AN IMAGING SENSOR, AN IMAGE PROCESSING DEVICE AND AN IMAGE PROCESSING METHOD
Granted Jun 24, 2025 • 2y 5m to grant

Study what changed to get past this examiner, based on the 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 12%
With Interview: 78% (+66.7%)
Median Time to Grant: 3y 6m
PTA Risk: Moderate
Based on 17 resolved cases by this examiner. Grant probability derived from career allow rate.
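The headline projections can be reproduced from the examiner's career counts (2 granted / 17 resolved). A minimal Python sketch, assuming the +66.7% interview lift is an absolute percentage-point uplift on the career allow rate — the page implies but does not state this interpretation:

```python
# Hypothetical reconstruction of the dashboard's headline numbers from
# the examiner's career counts shown above. The percentage-point reading
# of the +66.7% interview lift is an assumption, not the tool's stated method.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(granted=2, resolved=17)  # ~11.8%, displayed as 12%
with_interview = career + 66.7               # ~78.5%, displayed as 78%

print(f"Grant probability: {round(career)}%")
print(f"With interview: {round(with_interview)}%")
```

Under this reading, the 12% baseline and 78% with-interview figures both fall out of the same 2-of-17 career record.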
