Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. This Office Action is in response to the communication filed on 7/17/2025 (an Information Disclosure Statement (IDS)).
2. This is a Non-Final Office Action on the merits. Claims 1-20 are currently pending and are addressed below.
3. Examiner notes that the rejections set forth below are based on the broadest reasonable interpretation of the claim language. Applicant is kindly invited to consider each reference as a whole. References are to be interpreted as they would be by one of ordinary skill in the art rather than by a novice. See MPEP 2141. Therefore, the relevant inquiry when interpreting a reference is not merely what the reference expressly discloses on its face but what the reference would teach or suggest to one of ordinary skill in the art.
Claim Interpretation
4. The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
5. The claims in this application are given their broadest reasonable interpretation (BRI) using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
6. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
7. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
8. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
9. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
10. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “...one or more processors are further configured to...” (see pending claim 10, line 18), “...the controller is configured to...” (see pending claim 13, lines 2-3), and “...one or more processors are configured...to predict depth values” (see pending dependent claim 20, lines 1-2).
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
11. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
12. If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that perform the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
13. Claims 1-4, 8, 10, 12-15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Bodensteiner et al. (US Pub. 2018/0101235 A1, hereafter “Bodensteiner’235”) in view of Czarnecki (US 2023/0133175 A1, hereafter “Czarnecki’175”), and further in view of Hanson et al. (US 2025/0207365 A1, hereafter “Hanson’365”).
A. Per independent claims 1 and 10: Bodensteiner’235 teaches a method and a work machine (an implemented backhoe) configured to generate intervention feedback based on an object (agent 101) in a work area (see Bodensteiner’235, Fig. 1).
Bodensteiner’235 also discloses that “the image recording device 104 may further include distance measuring equipment for measuring a distance between the agent 101 and the material handler equipment 103, such as lidar-based distance measuring equipment, radar-based distance measuring equipment, or sonar-based distance measuring equipment,” and further discloses that “the geo-fence 121 may be configured to extend to a predetermined depth into the ground below the material handler equipment 103. This way, the location of the bucket 10 on the exemplary backhoe embodiment of the material handler equipment 103 may be tracked such that operation of the material handler equipment 103 may be ceased when the bucket 10 is determined to be located at a depth beyond the predetermined depth. By configuring the geo-fence 121 to track the location of the individual components of the material handler equipment 103, the agent 101 may have further back-up control settings to assist that the material handler equipment 103 does not operate beyond predetermined safe boundaries.”
Czarnecki’175 discloses a work machine having a sensor/camera 170 and a perception field of view 172 (see Czarnecki’175, FIG. 1, para. [0039]).
Bodensteiner’235 in view of Czarnecki’175 also performs a calibration process (see Bodensteiner’235, paras. [0046]-[0049]); after the calibration process, a user would receive haptic feedback to recognize a particular object within the perception field 172 (see Bodensteiner’235, para. [0038]).
- during a machine operation stage (see Bodensteiner ‘235 para. [0063] “the location of a bucket 10 on the exemplary backhoe embodiment of the material handler equipment 103 may be tracked such that operation of the material handler equipment 103 may be ceased when the bucket 10 is determined to be located at a depth beyond the predetermined depth”).
Bodensteiner’235 does not disclose that the work implement moves away to avoid a collision with a perception camera. However, Czarnecki’175 teaches a method of generating intervention feedback based on objects in a work area with a work machine 100, the work machine having a perception sensor and a work implement 162 associated therewith, the work implement having an available range of movement relative to a frame of the work machine which at least partially extends into a perception field 172 associated with the perception sensor 170, comprising:
- determining, using the perception sensor 170, depth data for a perception field portion with respect to objects identified within the perception field; for each of the plurality of positions, based at least in part on identified changes in the depth data with sequential positions of the work implement, generating a multidimensional manifold/“structure” in data storage comprising depth data (see Czarnecki’175, para. [0065]) associated with the work implement for the plurality of perception field portions including the work implement; and
- determining, via input from at least the one or more perception sensors, current depth data for each of a plurality of perception field portions with respect to objects identified within the perception field (see Czarnecki‘175 para. [0055]);
- determining an intervention event state based on the objects identified within the perception field, further disregarding any objects identified as corresponding to the multidimensional manifold corresponding to a current position of the work implement (e.g., filtering out all signals except those reflecting from the agent 101: “gesture detection device 102 that is worn by the human agent 101 may be positioned to operate outside of the geo-fence 121,” see Bodensteiner’235, para. [0027]). (An illustrative, non-limiting notational example of this disregarding step follows this listing.)
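For illustration only, the following notation is an explanatory aid and does not appear in the claims or in the cited references: let D_current(p) denote the current depth value determined for a perception field portion p, and let D_manifold(p, x) denote the stored manifold depth value for portion p when the work implement is at position x. Portion p may be disregarded for purposes of the intervention event state when |D_current(p) - D_manifold(p, x)| < ε for a small tolerance ε, so that only depth returns not attributable to the work implement itself (e.g., returns from the agent 101) contribute to the intervention determination.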
Bodensteiner’235 does not suggest avoiding a collision by changing the direction of a moving object; however, Hanson’365 suggests “attempted to avoid the collision by changing the machine direction and slowing down the implement prior to the collision mitigation event” and “...altering a path of the work machine, or preventing a movement of the work machine.” (See Hanson’365, para. [0032] and claim 3.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Bodensteiner’235 and Czarnecki’175 with Hanson’365 to change a direction of the work implement in order to ensure that safety requirements are met before the work machine is used in the field (see Hanson’365, para. [0003]).
B. Per dependent claims 2 and 13: Czarnecki’175 already suggests that the feedback signals (by using a camera 170 for monitoring) are provided in accordance with at least one intervention event state to a work machine controller for controlling one or more components of the work machine (e.g., controlling an alert generator) to avoid one or more of the objects identified within the perception field (i.e., a human 310 is standing in the field of view 172 of a rear-mounted camera 170; see Czarnecki’175, para. [0067]).
C. Per dependent claims 3 and 14: Czarnecki’175 also suggests that the feedback signals (by using a camera 170 for monitoring) are provided in accordance with at least one intervention event state to generate audio and/or visual alerts based at least in part on a spatial proximity of one or more of the objects identified within the perception field with respect to the work machine (see Czarnecki’175, para. [0068]).
D. Per dependent claims 4 and 15: Czarnecki’175 also suggests that an intervention alert state is triggered by at least one of: the object 310 being nearer to the frame of the work machine 100 than the work implement 162 within a common perception field portion 172 (see Czarnecki’175, FIG. 5A).
E. Per dependent claims 5 and 16: Czarnecki’175 also suggests that an intervention alert state is triggered by an object being within a corresponding threshold (see Czarnecki’175, paras. [0068] and [0071]).
F. Per claims 8 and 19: Czarnecki’175 already teaches that a multidimensional manifold structure/images (e.g., a depth dimension; see Czarnecki’175, “a depth disparity view,” para. [0065]) is generated for a plurality of positions using a stored pattern/model (e.g., “...are matched against stored patterns,” see Bodensteiner’235, para. [0060]) for a structure for the work implement.
14. Claims 6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Bodensteiner’235 in view of Czarnecki’175, in view of Hanson’365, and further in view of Cella et al. (US 2021/0287459 A1, hereafter “Cella’459”).
The rationales and references applied to reject claim 5 are incorporated herein.
The combination of Bodensteiner’235, Czarnecki’175, and Hanson’365 does not disclose that the alert states are respectively dependent on a travel speed of the work machine and a distance separating the at least one of the one or more objects; however, Cella’459 suggests this feature (see Cella’459, para. [0415], “e.g., by providing forward-sensing alerts at greater distances and/or lower speeds than in good weather, by providing automated braking earlier and more aggressively than in good weather, and the like”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Bodensteiner’235, Czarnecki’175, and Hanson’365 with Cella’459 to take account of travel speed relative to separation distance so as to react to a potential collision in a timely manner using a sensor (see Cella’459, para. [0904]).
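For illustration only, the speed- and distance-dependent nature of the alert states discussed above may be expressed with a simple time-to-collision relation; the symbols d (separation distance), v (travel speed), and t_alert (alert threshold) are explanatory aids and do not appear in the cited references: t_TTC = d / v, with an alert state triggered when t_TTC < t_alert. For example, under these assumptions, an object at d = 10 m with a closing speed of v = 2 m/s gives t_TTC = 5 s, which would trigger an alert for a threshold of t_alert = 6 s, whereas the same object at a closing speed of 1 m/s (t_TTC = 10 s) would not.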
15. Claims 7, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Bodensteiner’235, in view of Czarnecki’175, in view of Hanson’365, and in view of Google’s definition.
The rationales and references applied to reject claim 10 are incorporated herein.
The combination of Bodensteiner’235, Czarnecki’175, and Hanson’365 does not disclose that a perception field portion corresponds to respective pixels/“resolution” in a field of view for a perception sensor (see Czarnecki’175, para. [0054]); however, Google defines a proportional relationship between pixel density and FOV resolution: “Camera pixels in a field of view (FOV) determine image resolution, with higher pixel counts or narrower fields of view providing greater detail (higher spatial resolution). The pixels per unit area, or Instantaneous Field of View (IFOV), defines how much detail is captured,
[media_image1.png: greyscale image]
Instantaneous Field of View (IFOV): This represents the angle or physical area covered by a single pixel. Smaller IFOV means better resolution.
Pixel Density: High pixel density allows for finer detail, crucial for identifying objects.
To maintain high image quality across a wide field of view, it is necessary to use a camera with higher megapixels.”.
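For illustration only, the relationship quoted above may be expressed as IFOV ≈ FOV / N_pixels along a given axis; the numerical values below are hypothetical and are not taken from the cited references or the quoted Google result. For example, a camera with a 60-degree horizontal FOV and 1920 horizontal pixels yields IFOV ≈ 60° / 1920 ≈ 0.031° per pixel, whereas halving the FOV to 30 degrees or doubling the pixel count to 3840 yields IFOV ≈ 0.016° per pixel, i.e., finer spatial resolution.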
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Bodensteiner’235, Czarnecki’175, and Hanson’365 with Google to appreciate the direct relationship between a sensor/camera’s pixel density and FOV resolution.
16. Claims 9 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bodensteiner’235, in view of Czarnecki’175, in view of Hanson’365, and further in view of Kean (US Pub. 2023/0038266 A1, hereafter “Kean’266”).
The rationales and references applied to reject claim 10 are incorporated herein.
Czarnecki’175, para. [0055], teaches that “image data from any one or more image data sources may be provided for three-dimensional point cloud generation, image segmentation, object delineation and classification, and the like.”
Bodensteiner’235 teaches a feature of predicting depth values (e.g., “...are matched against stored patterns,” see Bodensteiner’235, para. [0060]) for perception field portions corresponding to the work implement.
Bodensteiner’235 in view of Czarnecki’175 does not suggest verifying a stored model; however, Kean’266 suggests that idea (see Kean’266, para. [0086], “best fit may be performed,” i.e., by using stored models) based on captured depth values at the respective perception field portions (as taught by Bodensteiner’235 in view of Czarnecki’175).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Bodensteiner’235, Czarnecki’175, and Hanson’365 with Kean’266 to verify a model from a stored database for a match in order to use a best-fit characteristic of an object (see Kean’266, para. [0086]).
Conclusion
17. Pending claims 1-20 are rejected.
18. Remarks:
[media_image2.png: greyscale image]
- Jo, Byung-Wan, Yun-Sung Lee, Jung-Hoon Kim, Do-Keun Kim, and Pyung-Ho Choi. "Proximity warning and excavator control system for prevention of collision accidents." Sustainability 9, no. 8 (2017): 1488.
- Perception sensors include cameras, LiDARs, RADARs, thermal cameras, ultrasonic sensors, infrared sensors, and the like. They work well individually, but can also be fused together to provide an accurate understanding of the surrounding objects. Localization sensors are sensors such as GPS, RTK-GPS, or even workaround solutions such as ultra-wideband.
19. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Cuong H Nguyen, whose telephone number is (571) 272-6759 (email address: cuong.nguyen@uspto.gov). The examiner can normally be reached on M-F, 9:30 AM to 5:30 PM. Examiner interviews are available via telephone and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BENDIDI RACHID, can be reached at (571) 272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CUONG H NGUYEN/Primary Examiner, Art Unit 3664