Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. BR1020210198168, filed on 10/01/2021.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/2//2024 was filed. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.
Based on the breadth of the claim language and the high-level nature of the disclosure reflected in the claim set, the following full-scope enablement and Written Description (WD) issues arise (Wands factors: breadth, predictability, amount of guidance, quantity of experimentation, state of the art):
AI “for navigation and decision-making in pest identification and control” (claim 1; claim 3 “deep learning”).
Enablement: As claimed, this encompasses all algorithms that achieve navigation and pest classification/decisions across diverse crops, geographies, lighting, occlusion, and pest/pathogen phenotypes. Without disclosed architectures, training regimes, datasets, domain adaptation, latency/compute constraints, and failure handling, practicing the full scope would require undue experimentation.
Written description: The claims outstrip any specific embodiments unless the spec details concrete models, pipelines, and training evidence tied to the claimed functions.
Fix: Add/claim specific architectures and training methods (e.g., perception/fusion, model architecture, quantization, latency budgets, confidence thresholds).
Actuator operation (“laser device or suction pump”).
Enablement: Safe/effective laser ablation of pests without crop damage depends on beam modulation, dwell time, scan path, thermal feedback, and safety interlocks (e.g., IEC 60825). If not disclosed, full-scope operation is not enabled.
Fix: Disclose and claim control laws, safety interlocks, and measurable performance (e.g., maximum collateral heating %, scan frequency ranges).
Tele-operation/telemetry (“embedded servers,” radios, “real-time failure identification”).
Enablement: Guaranteeing real-time performance for control/alerts requires architectural details (scheduling, QoS, buffering, loss handling). Absent specifics, full-scope “real-time” behavior is not enabled.
Fix: Disclose protocol stacks, timing constraints, buffering strategies, and quantitative latency/reliability targets; claim them.
Mechanical extensions (“sliding portion that comprises a metal bar”).
Enablement/WD: For 5-DoF actuation plus sliding extension, load, stiffness, positional accuracy, lockout/limit handling should be disclosed to support autonomous operation claims.
Fix: Provide structural specifics (joint types, ranges, tolerances) and control implications; narrow the claim accordingly.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention.
Unless otherwise indicated, limitations quoted are from claim 1.
“embedded artificial intelligence algorithms for navigation and decision-making in pest identification and control”
Issue: Purely result-oriented functional language that fails to specify how (architecture, inputs/outputs, constraints, control logic) the AI accomplishes navigation/decision-making. A PHOSITA cannot ascertain the metes and bounds—virtually any algorithm qualifies.
Fix: Recite specific inputs/outputs and technical steps (e.g., perception pipeline, fusion method, control policy, timing/latency constraints) that delimit the scope.
“embedded servers”
Issue: Ambiguous term. “Server” can mean hardware or software role; “embedded” is unclear (onboard microserver? SBC? virtualized service?).
Fix: Define the server as structure (processor/memory/OS) or as a software service with a concrete function and location (onboard vs. offboard; edge vs. cloud).
“a horizontal structural base”
Issue: “Horizontal” is frame-dependent and ambiguous in mobile platforms (grade/tilt?). Scope is unclear.
Fix: Define with respect to the robot body frame and permissible tilt/roll ranges (e.g., “base plate defining the x-y plane of the chassis within ±3° in nominal operation”).
“at least one control element endowed with five degrees of freedom”
Issue: “Control element” is vague; “five degrees of freedom” is ambiguous as to which DoF (which axes, which prismatic/rotational joints). Enforcement becomes uncertain.
Fix: Recite joint sequence and type (e.g., “a 5-DoF arm comprising R-R-P-R-R joints about [axes], with ranges [°]/[mm]”).
“bearing at least one 360 camera”
Issue: “360 camera” is not a standard term of art with precise scope (fisheye, dual-fisheye, multi-camera stitch?).
Fix: Specify field-of-view coverage (e.g., “omnidirectional camera providing ≥ 360° azimuth and ≥ 180° elevation coverage”) and sensor arrangement if relevant.
“at least two lateral depth cameras”
Issue: “lateral” is reference-frame ambiguous (relative to base? travel direction?).
Fix: Define lateral w.r.t. body frame (e.g., +/−Y axes relative to chassis), mounting location, and FOV overlap.
“at least one in-use signaling device”
Issue: Indefinite—could be a lamp, buzzer, screen, radio beacon. No scope boundaries.
Fix: Identify type and function (e.g., “visual status lamp emitting ≥ X lumens during active pest-control actuation”).
“at least one positioning and location device”
Issue: Redundant/ambiguous pairing (“positioning” vs. “location”).
Fix: Use standard terms (e.g., “RTK-GNSS receiver and IMU”) and/or define accuracy/availability metrics.
Dependent claims (selected):
Claim 10 (telemetry/energy sensing) & Claim 11 (“real-time failure identification, generating alerts and alarms”): Indefinite as to what constitutes “failure,” detection thresholds, and “real-time.” Fix: Define monitored variables, thresholds, detection windows (e.g., “<50 ms end-to-end alerting”) and alert modalities.
Claim 12 (3G/4G/5G/Wi-Fi/XBee): Merely listing radio types without protocol/QoS leaves scope unclear for “support.” Fix: Specify protocol layers, bitrate, and latency/packet-loss tolerances tied to the control loop.
Claim 13 (“controlled remotely, either near-field or long-distance”): Indefinite terms (“near-field,” “long-distance”). Fix: Define ranges (e.g., “<10 m BLE link” vs. “>500 m cellular link”).
Claim 14 (“hierarchical security and control levels”): Indefinite; hierarchy depth/roles unstated. Fix: Recite at least two named levels with permissions/constraints (e.g., operator vs. supervisor with defined overrides).
Claim 15 (“sliding portion that comprises a metal bar”): Scope of “sliding portion” (linear? telescopic?) is unclear. Fix: Define mechanism (e.g., “telescoping prismatic joint with stroke S and lockout detents at …”).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims Rejected: Claims 1–15 are rejected under 35 U.S.C. §101 because the claims are directed to a judicial exception (an abstract idea—a mental process) and do not integrate that exception into a practical application, nor do they recite an “inventive concept” that amounts to significantly more than the exception itself.
I. Application & Claim Context
Independent claim 1 recites an “autonomous robot platform for autonomous crop pest identification and control” including: “embedded artificial intelligence algorithms for navigation and decision-making in pest identification and control”; a computing/communication stack (e.g., servers/telemetry); mobility hardware (base, supports, wheels/casters, chain drive); a control element bearing a camera and a laser or suction device; lateral depth cameras; signaling; and a positioning device (e.g., GPS/RTK). Dependent claims add details such as GPS/RTK + depth-camera navigation, shock absorbers, emergency stop, solar panels, telemetry with real-time alerts, radio stacks (3G/4G/5G/Wi-Fi/XBee), remote control, hierarchical control/security, and a sliding metal bar extension.
II. Step 1 (Statutory Category)
The claims are drawn to a machine. Step 1 is satisfied.
III. Step 2A (Prong 1): Do the claims recite a judicial exception?
Yes. Properly framed, the claims are directed to the mental process of an agricultural worker: observing a plant, identifying a pest, and deciding to eliminate it. The claims merely automate that human cognitive workflow with “AI algorithms for navigation and decision-making.” Mental processes are a judicial exception. See CyberSource v. Retail Decisions, 654 F.3d 1366, 1372 (Fed. Cir. 2011) (methods performable by human thought alone are abstract); Electric Power Group v. Alstom, 830 F.3d 1350, 1354–55 (Fed. Cir. 2016) (collect–analyze–decide is abstract); SAP v. InvestPic, 898 F.3d 1161, 1163–67 (Fed. Cir. 2018) (mathematical analyses/inferences are abstract); In re TLI, 823 F.3d 607, 613–15 (Fed. Cir. 2016) (classification using generic imaging/servers is abstract). Implementing a mental process with “AI” does not alter its abstract character. Alice, 573 U.S. at 221–23.
IV. Step 2A (Prong 2): Is the exception integrated into a practical application?
No. Although claimed as a system with physical components, the claims do not integrate the abstract mental process into a practical application imposing a meaningful limit on the exception. Two independently sufficient grounds:
Functional, result-oriented claiming of a generic machine: The robot is defined by the result it achieves—performing the abstract process of pest identification and control—using conventional parts in their ordinary roles (moving, sensing, communicating, actuating). The claims lack any specific improvement to robotics/computing (e.g., defined perception pipeline, sensor fusion algorithm, model architecture/loss/training regime, control law, planner, latency/throughput architecture, or interlock/modulation scheme) that improves computer/robot performance qua technology. This is analogous to system claims found ineligible in Interval Licensing, LLC v. AOL, Inc., 896 F.3d 1335, 1345 (Fed. Cir. 2018) (physical components “nothing more than conventional elements that operate in their normal manner to accomplish the abstract idea”) and ChargePoint, Inc. v. SemaConnect, Inc., 920 F.3d 759, 769–75 (Fed. Cir. 2019).
Actuation as insignificant post-solution activity: The laser/suction merely executes the decision (i.e., “zapping”/“sucking”) after classification. Such physical execution of a decision, without a claimed improvement to actuator control or safety, is an insignificant post-solution activity that does not integrate the abstract idea. Bilski v. Kappos, 561 U.S. 593, 610–11 (2010); Electric Power, 830 F.3d at 1354–55.
Therefore, the claims remain directed to the mental process, with the robot serving as a generic vehicle for hosting/executing the abstract idea. See MPEP 2106.04(d).
V. Step 2B (“Significantly More”): Is there an inventive concept?
No. Even assuming arguendo some integration at Prong 2, the claims lack an “inventive concept.”
A. Individually (WURC)
Each component—cameras/depth cameras, GPS with RTK, articulated element, laser/suction, mobile base with wheels/casters/chain drive, shock absorbers, emergency stop, solar panels, servers/telemetry using 3G/4G/5G/Wi-Fi/XBee, remote control, hierarchical control/security, sliding bar—reflects well-understood, routine, and conventional hardware performing ordinary functions in mobile robotics/field automation. See Berkheimer v. HP, 881 F.3d 1360, 1368 (Fed. Cir. 2018) (WURC is factual; conventionality supported by the specification’s characterizations of commercial/standard components).
B. As an Ordered Combination (explicit tie to the mental process)
As an ordered combination, the elements are arranged to do nothing more than automate the steps of the underlying mental process:
the “sense” block (cameras/depth cameras, GPS/RTK) corresponds to the human observing the plant;
the “plan” block (the recited “AI algorithms for navigation and decision-making”) corresponds to the human identifying and deciding whether to eliminate a pest; and
the “act” block (laser/suction) corresponds to the human eliminating the pest.
This sense → plan → act pipeline is the inevitable, conventional structure one would employ to automate that very mental process. It neither reflects a non-conventional technical arrangement nor provides a technological improvement to the computer/robot itself. See Electric Power, 830 F.3d at 1355 (conventional steps arranged to achieve an abstract objective are not an inventive concept); Interval Licensing, 896 F.3d at 1345; ChargePoint, 920 F.3d at 769–75. By contrast, cases finding an inventive concept claim a particularized architecture/arrangement or a specific improvement to computing/robotics (BASCOM, 827 F.3d 1341; Enfish, 822 F.3d 1327; McRO, 837 F.3d 1299)—which is absent here.
Conclusion (Step 2B): Neither the individual elements nor their ordered combination supplies an inventive concept under Alice.
VI. Representative Claim (Claim 1)
Abstract idea (Prong 1): Automating the agronomic mental process (observe → identify → decide).
No integration (Prong 2): The system is claimed functionally and serves as a generic vehicle for the abstract idea; actuation is insignificant post-solution activity.
No inventive concept (Step 2B): Conventional components, conventionally arranged as a sense → plan → act pipeline that mirrors the mental process itself.
VII. Dependent Claims
None of the dependent limitations cures the 101 defect:
Claim 2 (GPS/RTK + depth cameras): Conventional positioning/perception; no specific fusion, error model, or fallback architecture improving navigation technology.
Claim 3 (deep learning pest ID): “Deep learning” is recited at a result-only level; without a concrete technical improvement, it reinforces abstraction. SAP, 898 F.3d at 1163–67.
Claims 4–6 (wheels/casters; powered wheels; chain drive): Routine mobility hardware selections.
Claim 7 (shock absorbers); Claim 8 (emergency stop): Standard safety/suspension components used conventionally.
Claim 9 (solar panels): Generic power-source choice.
Claims 10–12 (telemetry/alerts; radio stacks): Ordinary telemetry/networking; no novel edge/network architecture or QoS improvement.
Claim 13 (remote control): Conventional manual/remote modes.
Claim 14 (hierarchical security/control): High-level supervisory policy without specific control architecture—purely functional.
Claim 15 (sliding metal bar): Routine reach-extension; insignificant post-solution activity in the Bilski sense.
VIII. Examiner Conclusion
Under the 2019 PEG, the claims are (i) directed to a judicial exception (a human mental process automated by AI) and (ii) fail to integrate that exception into a practical application or to recite significantly more. Claims 1–15 are ineligible under 101.
IX. Amendment Guidance
To overcome 101, claim the “how,” not merely the “what.” Present claims are (i) result-oriented (“AI for navigation and decision-making”) and (ii) built on generic robotic components operating conventionally. Amend to recite specific, non-conventional technical mechanisms that improve the functioning of the robot/computer stack itself (not merely field outcomes).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 4, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Santhosh et al. (AU2021101399A4), hereinafter “Santhosh,” in view of Letsky et al. (US20140180478A1), hereinafter “Letsky,” and further in view of Koselka et al. (US20060213167A1), hereinafter “Koselka.”
Regarding Claim 1,
Santhosh discloses An autonomous robot platform (see at least Abstract: “An agriculture smart robot device”: Rationale: Expressly discloses an autonomous agriculture robot device, satisfying the claimed “autonomous robot platform” element directly and unambiguously) for autonomous crop pest identification (see at least Abstract: “weeding… a computer or artificial intelligence system that can sense and decide… coupled with, machine vision… laser rastering.”: Rationale: “Weeding” plus “AI… sense and decide” with “machine vision” teaches autonomously identifying crop pests (weeds) prior to control actions) and control (see at least Abstract: “weeding… coupled with, machine vision, laser rastering…” and “a robot or server can create an action plan that a robot may implement.”: Rationale: “Weeding” via “laser rastering” and executing an “action plan” constitute autonomous pest control actions, satisfying the claim’s control requirement directly), the autonomous robot platform (see at least Abstract: “An agriculture smart robot device”: Rationale: Expressly discloses an autonomous agriculture robot device, satisfying the claimed “autonomous robot platform” element directly and unambiguously) comprising: embedded artificial intelligence algorithms (see at least Abstract: “a computer or artificial intelligence system that can sense and decide before acting on the work object”: Rationale: Santhosh explicitly teaches an onboard AI system making decisions before acting, which meets “embedded artificial intelligence algorithms” as claimed) for navigation (see at least Abstract: “A robot moves through a field first to ‘map’ the plant locations… Once the map is complete, a robot or server can create an action plan that a robot may implement”: Rationale: Field mapping and subsequent action planning describe autonomous route planning and guidance, meeting the requirement that embedded algorithms provide navigation) and decision-making (see at least Abstract: “a computer or artificial 
intelligence system that can sense and decide before acting on the work object”: Rationale: The explicit statement that the AI can sense and decide before acting squarely teaches the embedded decision-making capability required by this limitation) in pest identification (see at least Abstract: “weeding… a computer or artificial intelligence system that can sense and decide… coupled with, machine vision”: Rationale: Weeding with machine vision and AI that sense and decide necessarily entails detecting weeds as pests, teaching autonomous pest identification) and control (see at least Abstract: “laser rastering” … “a robot or server can create an action plan that a robot may implement.”: Rationale: Laser rastering and executing an action plan are autonomous actuation steps applying control to pests, satisfying the claim’s control requirement); embedded servers (see at least Abstract: “a robot or server can create an action plan that a robot may implement.”: Rationale: The explicit use of a “server” generating the action plan for robotic execution teaches embedded server-based planning within the system); at least one control element (see at least Abstract: “Specifically to the use of robotic armatures”: Rationale: Robotic armatures are control elements that position and actuate end-effectors; disclosing their use satisfies the presence of at least one control element) endowed with five degrees of freedom (see at least Abstract: “Specifically to the use of robotic armatures”: Rationale: Robotic armatures conventionally provide multiple degrees of freedom; selecting five degrees for crop access is a predictable configuration per Santhosh’s arm context), three degrees of freedom of rotation (see at least Abstract: “Specifically to the use of robotic armatures”: Rationale: Articulating arms inherently include multiple rotational joints; three rotational axes are a routine arm design within Santhosh’s articulated framework), and two degrees of freedom of translation (see at 
least Abstract: “Specifically to the use of robotic armatures”: Rationale: Translational positioning is achieved via joint combinations in articulated arms; Santhosh’s actuated hands provide the required translational degrees), comprising a distal end (see at least pg. 7 of 21, ll. 11-13: “Each arm may consist of one or greater cameras and/or an embedded processor… and an give up effector which offers further movement.”: Rationale: Presence of an “end effector” identifies the arm’s distal end, expressly meeting the distal-end requirement for the control element) with at least one 360 camera (see at least pg. 10 of 21, ll. 26-27: “cameras… allow perspectives all round and even in the plant…”: Rationale: Cameras providing “perspectives all round” support panoramic viewing; substituting a 360 camera is a known panoramic embodiment within Santhosh’s camera disclosure); and at least one of: a laser device or a suction pump (see at least Fig. 1: “DC sprayer pump”: Rationale: Santhosh discloses a sprayer pump, which satisfies the claim’s disjunctive tool requirement).
However, Santhosh does not explicitly disclose a horizontal structural base; at least two front support elements affixed to the horizontal structural base, wherein each of the at least two front support elements has a means of locomotion; at least two rear support elements affixed to the horizontal structural base, wherein each of the at least two rear support elements has a means of locomotion; at least one in-use signaling device; and at least one positioning and location device on top of the horizontal structural base.
Letsky discloses a horizontal structural base (see at least [0023]: “a chassis 101, a plurality of wheels 102 mounted to the chassis 101”: Rationale: A “chassis” is the horizontal structural base; Letsky expressly discloses a chassis to which wheels and modules are mounted, satisfying this element); at least two front support elements (see at least [0023]: “a plurality of wheels 102 mounted to the chassis 101”: Rationale: A plurality of wheels on a chassis inherently includes front supports; conventional mobile chassis provide at least two front wheel supports) affixed to the horizontal structural base (see at least [0023]: “mounted to the chassis 101”: Rationale: “Mounted to the chassis” explicitly satisfies “affixed to the horizontal structural base” for the front support elements), wherein each of the at least two front support elements (see at least [0023]: “a plurality of wheels 102”: Rationale: Each front support element corresponds to a wheel assembly; plurality implies multiple discrete supports at the front positions) has a means of locomotion (see at least [0023]: “wheels 102… drive system 560”: Rationale: Wheels driven by a drive system are a classic locomotion means; each front wheel support provides locomotion as required); at least two rear support elements (see at least [0023]: “a plurality of wheels 102 mounted to the chassis 101”: Rationale: A multi-wheel chassis inherently includes rear supports; plurality encompasses rear wheel supports on the horizontal base) affixed to the horizontal structural base (see at least [0023]: “mounted to the chassis 101”: Rationale: Rear wheels are likewise “mounted to the chassis,” meeting the requirement that the rear support elements are affixed to the base), wherein each of the at least two rear support elements (see at least [0023]: “a plurality of wheels 102”: Rationale: Each rear support element corresponds to an individual wheel assembly; the plurality provides at least two rear support elements) has a means of 
locomotion (see at least [0023]: “drive system 560… wheels 102”: Rationale: The drive system actuates the wheels, providing locomotion for each rear support element as claimed); at least one in-use signaling device (see at least [0045]: “Finally, the alarm may be coupled to the CPU 501… The alarm 519 may be activated… This will signal to a person in the vicinity of the autonomous robot 199…”: Rationale: Letsky’s system includes operational alerts/communications indicating state; an alarm or indicator constitutes an in-use signaling device during operation); and at least one positioning and location device on top of the horizontal structural base (see at least [0039]-[0040]: “an LOI module 540… comprises a global positioning system (GPS) sensor 507, a compass 508 and an accelerometer 509… is configured to determine the location and orientation of the chassis 10”: Rationale: Letsky explicitly discloses the positioning/location device (LOI with GPS/compass/accelerometer). A POSITA would mount the GPS/LOI on top of the chassis to ensure clear sky view, minimize multipath/occlusion, and maximize positioning accuracy, a conventional placement that does not alter operation).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Santhosh and Letsky before them, to implement Santhosh’s AI/vision/arm agricultural robot on a proven mobile base by adopting Letsky’s explicit chassis with wheels and alarm for in-use signaling and incorporating its LOI/GPS module—conventionally top-mounted for sky view—thereby satisfying the base/support/locomotion, signaling, and positioning limitations without changing Santhosh’s principle of operation or requiring invention.
However, Santhosh and Letsky do not explicitly disclose at least two lateral depth cameras.
Koselka discloses at least two lateral depth cameras (see at least [0105]: “Several stereo camera pairs may be located around the perimeter of the platform. These camera pairs are shown in FIG. 1 as “Stereo Cameras Around Robot Body”. These cameras enable the robot to view a significant area at all times. The robot may use these cameras to navigate through the fields and to map the fruit and vegetables located near the outside of the plants.”: Rationale: Explicitly teaches multiple stereo (depth) camera pairs placed around the perimeter (i.e., lateral placement), satisfying the limitation).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Santhosh, Letsky, and Koselka before them, to implement Santhosh’s AI/vision/arm agricultural robot on a proven mobile base by adopting Letsky’s explicit chassis with wheels and alarm for in-use signaling, incorporating Letsky’s LOI/GPS module (conventionally top-mounted for sky view), and adding Koselka’s perimeter stereo camera pairs (lateral depth cameras), thereby satisfying the base/support/locomotion, signaling, positioning, and lateral depth-camera limitations without changing Santhosh’s principle of operation.
Regarding Claim 4,
Santhosh, Letsky, and Koselka disclose all the limitations of claim 1.
Santhosh further discloses an autonomous robot platform (see at least Abstract: “An agriculture smart robot device.”: Rationale: Explicitly identifies an autonomous agricultural robot platform, providing the context for the locomotion limitation), wherein the means of locomotion are wheels (see at least pg. 13 of 31, ll. 30-32: “Steering Operation: The two DC gear motor is used for steering operation, which is attached at rear wheel and front is ideal wheel.”: Rationale: States rear wheel and front wheel with motors for steering, establishing wheel-based locomotion as the platform’s means of movement).
Regarding Claim 13,
Santhosh, Letsky, and Koselka disclose all the limitations of claim 1.
Letsky discloses wherein the autonomous robot platform is controlled remotely (see at least [0076]: “The WiFi module 709… allows the CPU 703 to connect to the Internet.”: Rationale: An Internet-connected CPU enables command/control from a remote operator over a network, satisfying the requirement that the platform is controlled remotely) either near-field or long-distance (see at least [0076]: “The WiFi module 709… allows the CPU 703 to connect to the Internet.”: Rationale: Local WiFi supports near-field control; Internet connectivity provides long-distance control. Letsky’s WiFi-to-Internet connectivity explicitly enables both control ranges).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Santhosh, Letsky, and Koselka before them, to implement Letsky’s WiFi-to-Internet connectivity on Santhosh’s autonomous platform to enable remote control over local WiFi (near-field) and Internet (long-distance)—a routine integration yielding predictable operational benefits without altering Santhosh’s principle of operation or requiring invention.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Santhosh, in view of Letsky, in view of Koselka, and in view of Pichlmaier et al. (US20170336787A1), hereinafter referred to as Pichlmaier.
Regarding Claim 2,
Santhosh, Letsky, and Koselka disclose all the limitations of claim 1.
Santhosh further discloses wherein the artificial intelligence algorithm (see at least Abstract: “a computer or artificial intelligence system that can sense and decide before acting on the work object”: Rationale: Explicitly recites an artificial intelligence system making decisions, satisfying the requirement for an AI algorithm driving platform behavior during navigation) for navigation (see at least Abstract: “A robot moves through a field first to ‘map’ the plant locations… Once the map is complete, a robot or server can create an action plan that a robot may implement.”: Rationale: Mapping and action planning describe autonomous route selection and guidance, meeting the limitation that the algorithm is for navigation of the platform) processes GPS information (see at least pg. 11 of 21, ll. 12-13: “A robot… may also include a GPS”: Rationale: Including a GPS provides positioning data processed by onboard algorithms, directly meeting the requirement to process GPS information for navigation).
However, Santhosh does not explicitly disclose that the navigation processes RTK base corrections and images from at least two depth-sensing cameras for the movement of the autonomous robot platform.
Koselka further discloses that the navigation processes images (see at least [0105]: “These cameras enable the robot to view a significant area at all times. The robot may use these cameras to navigate through the fields…”: Rationale: States cameras provide imagery used for navigation, establishing that images are processed by the navigation algorithm as claimed during movement) from at least two depth-sensing cameras (see at least [0105]: “Several stereo camera pairs may be located around the perimeter of the platform.”: Rationale: Stereo camera pairs are depth-sensing; several pairs provide at least two depth cameras, satisfying the multi-camera requirement for navigation imaging) for the movement of the autonomous robot platform (see at least [0105]: “The robot may use these cameras to navigate through the fields…”: Rationale: Explicitly links camera outputs to navigation through fields, establishing sensor processing used to drive movement of the platform during operation).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Santhosh, Letsky, and Koselka before them, to implement Santhosh’s AI navigation (using GPS) on Letsky’s mobile chassis while incorporating Koselka’s stereo depth-camera imagery for field movement—an expected integration improving path planning and obstacle avoidance without changing Santhosh’s principle of operation or requiring invention.
However, Santhosh, Letsky, and Koselka do not explicitly disclose that the navigation processes RTK base corrections.
Pichlmaier discloses that the navigation processes RTK base corrections (see at least [0024]: “Positional guidance … is provided by a global navigation satellite system (GNSS) with real-time kinematic (RTK) enhancement… The RTK reference station or beacon 18 … relays the RTK-derived position correction data 19 to the robots 12.”: Rationale: The paragraph explicitly discloses an RTK reference station delivering correction data to the robots, directly satisfying the requirement for processing RTK base corrections).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Santhosh, Letsky, Koselka, and Pichlmaier before them, to implement Santhosh’s AI-driven navigation (using GPS) on Letsky’s wheeled chassis while incorporating Koselka’s perimeter stereo depth-camera imagery for real-time field movement and Pichlmaier’s RTK base-correction feed for centimeter-level positioning—thereby achieving predictable improvements in path planning, obstacle avoidance, and localization without changing Santhosh’s principle of operation or requiring invention.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Santhosh, in view of Letsky, in view of Koselka, in view of Wang et al., hereinafter referred to as Wang, and in view of Zhang et al., hereinafter referred to as Zhang.
Regarding Claim 3,
Santhosh, Letsky, and Koselka disclose all the limitations of claim 1.
Santhosh further discloses wherein the artificial intelligence algorithm (see at least Abstract: “a computer or artificial intelligence system that can sense and decide before acting on the work object.”: Rationale: Explicitly recites an artificial intelligence system making decisions, satisfying the requirement for an AI algorithm driving platform behavior during navigation and control) for decision-making (see at least Abstract: “sense and decide before acting on the work object.”: Rationale: “Sense and decide” expressly teaches autonomous decision-making by the AI, directly meeting the claim’s requirement that the algorithm performs decision-making) and pest control (see at least Abstract: “weeding… coupled with, machine vision, laser rastering…”: Rationale: “Weeding” and “laser rastering” disclose active pest-control actions performed by the robot, satisfying the claim’s pest-control portion with explicit actuation), and detect weeds (see at least Abstract: “weeding… coupled with, machine vision…”: Rationale: “Weeding” with machine vision directly entails detecting weeds as targets, explicitly meeting the claim requirement to detect weeds during operation).
However, Santhosh and Letsky do not explicitly disclose that the algorithm for decision-making and pest control is based on deep learning, and is trained to identify pests in the form of eggs, larvae, caterpillars or insects and to identify plant phenological stages.
Koselka discloses identifying pests in the form of eggs, larvae, caterpillars or insects (see at least [0050]: “Detect disease and insect/mite infestations.”: Rationale: The explicit “insect/mite infestations” disclosure satisfies the disjunctive “insects” category within the eggs, larvae, caterpillars, or insects listed in the limitation).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Santhosh, Letsky, and Koselka before them, to integrate Santhosh’s AI-driven decision-making and pest-control workflow onto Letsky’s mobile chassis while using Koselka’s insect-detection imagery; and, consistent with routine practice by the priority date, to implement the decision-making with a deep-learning classifier trained on labeled pest and phenology datasets—yielding predictable gains in recognition accuracy and control efficacy without changing Santhosh’s principle of operation or requiring invention.
However, Santhosh, Letsky, and Koselka do not explicitly disclose that the algorithm for decision-making and pest control is based on deep learning and is trained to identify pests and identify plant phenological stages.
Wang discloses an algorithm for decision-making and pest control that is based on deep learning (see at least Abstract: “…an end-to-end approach named PestNet for large-scale multi-class pest detection and classification based on deep learning.”: Rationale: The explicit phrase “based on deep learning” establishes the algorithmic foundation as deep learning, directly satisfying the “based on deep learning” requirement) and is trained to identify pests (see at least Fig. 1 and Fig. 2: “the components … will go through the prior training phase on training images before test phase.” and “Sample images of…Pest Species …”: Rationale: Explicitly states the components undergo prior training on pest images to perform detection/classification, satisfying the “trained to identify pests” requirement as claimed).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Santhosh, Letsky, Koselka, and Wang before them, to implement Santhosh’s AI-driven decision-making and pest-control workflow on Letsky’s mobile chassis while using Koselka’s stereo depth-camera imagery as inputs to Wang’s deep-learning, trained multi-class pest classifier, and to extend that same deep-learning framework to phenological-stage labeling—an expected, routine adaptation yielding predictable improvements in recognition and control without changing Santhosh’s principle of operation or requiring invention.
However, Santhosh, Letsky, Koselka, and Wang do not explicitly disclose identifying plant phenological stages.
Zhang discloses identifying plant phenological stages (see at least Abstract: “…a novel convolutional neural network architecture… for the fine-grained classification of banana’s ripening stages… offers a deep indicator of banana’s ripening stage.”: Rationale: Ripening stages are plant phenological stages; CNN-based classification of ripening explicitly identifies phenology, directly satisfying the phenological-stage identification limitation).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Santhosh, Letsky, Koselka, Wang, and Zhang before them, to implement Santhosh’s AI-driven pest-control workflow on Letsky’s mobile chassis, use Koselka’s stereo depth-camera imagery for navigation inputs, employ Wang’s deep-learning, trained multi-class pest classifier for decision-making and pest control, and extend Zhang’s deep-learning phenological-stage labeling—thereby achieving predictable improvements in recognition and control without changing Santhosh’s principle of operation or requiring invention.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Santhosh, in view of Letsky, in view of Koselka, and in view of Nooman et al. (US 5,204,814 A), hereinafter referred to as Nooman.
Regarding Claim 5,
Santhosh, Letsky, and Koselka disclose all the limitations of claim 4.
Santhosh discloses an autonomous robot platform (see at least Abstract: “An agriculture smart robot device.”: Rationale: Explicitly identifies an autonomous agricultural robot platform, providing the context upon which the locomotion limitation rests), wherein the wheels include front wheels and rear wheels (see at least pg. 13 of 31, ll. 30-32: “Steering Operation: The two DC gear motor is used for steering operation, which is attached at rear wheel and front is ideal wheel.”: Rationale: The explicit reference to a “front wheel” and “rear wheel” expressly discloses the presence of front and rear wheels on the platform, meeting this requirement).
However, Santhosh, Letsky, and Koselka do not disclose wherein the front wheels are powered, and wherein the rear wheels are free-swiveling casters.
Nooman discloses wherein the front wheels are powered (see at least col. 2, ll. 63-66: “two front drive wheels… Both front wheels are powered by their own electric servo gear motor.”: Rationale: Explicitly states the front wheels are “drive” wheels and “powered,” directly satisfying the requirement that the front wheels are powered), and wherein the rear wheels are free-swiveling casters (see at least col. 2, ll. 63-66: “two non powered trailing wheels which swivel.”: Rationale: “Non powered” and “which swivel” together describe free-swiveling caster wheels at the rear, meeting the rear caster limitation verbatim).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Santhosh, Letsky, Koselka, and Nooman before them, to implement Santhosh’s platform on a conventional wheeled undercarriage using powered front wheels and rear free-swiveling casters as taught by Nooman (with Letsky’s chassis context), achieving predictable gains in maneuverability, traction, and tight turning in crop rows without changing Santhosh’s principle of operation or requiring invention.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Santhosh, in view of Letsky, in view of Koselka, in view of Nooman, and in view of Willett et al. (US 4,263,977 A), hereinafter referred to as Willett.
Regarding Claim 6,
Santhosh, Letsky, Koselka, and Nooman disclose all the limitations of claim 5.
Nooman discloses wherein the front wheels are powered (see at least col. 2, ll. 63-65: “The autonomous lawn mower includes two front drive wheels…Both front wheels are powered...”: Rationale: Explicitly states both front wheels are powered by electric servo gear motors, satisfying the requirement that the front wheels are powered).
However, Santhosh, Letsky, Koselka, and Nooman do not explicitly disclose the front wheels are powered by an engine, through a chain drive system.
Willett discloses the front wheels are powered by an engine (see at least col. 3, ll. 35-42: “A coupling 56 connects a gear reducer 57 adjoining the engine 34 with a transmission 58… to transmit propelling force to the front wheels 41, 42.”: Rationale: Engine 34 couples through the reducer and transmission, transmitting propelling force to front wheels 41, 42, directly satisfying engine-powered front wheels), through a chain drive system (see at least col. 3, ll. 37-42: “Opposite output shafts 59… have sprocket gears 60… These sprocket gears 60 mesh with parallel chains 61… engage further sprocket gears 62… to transmit propelling force to the front wheels 41, 42.”: Rationale: Sprocket gears drive parallel chains 61 engaging sprocket gears 62, forming a chain drive that transmits propulsion to the front wheels).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Santhosh, Letsky, Koselka, Nooman, and Willett before them, to implement Santhosh’s platform on Letsky’s wheeled chassis using Nooman’s front-wheel drive architecture while substituting Willett’s engine-to-gearbox-to-sprocket/chain transmission to drive the front wheels—achieving the claimed engine-driven chain propulsion with predictable benefits and without changing Santhosh’s principle of operation or requiring invention.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Santhosh, in view of Letsky, in view of Koselka, and in view of Melone et al. (US-20050144923-A1), hereinafter referred to as Melone.
Regarding Claim 7,
Santhosh, Letsky, and Koselka disclose all the limitations of claim 1.
However, Santhosh, Letsky, and Koselka do not explicitly disclose wherein each of the at least two front support elements, and each of the at least two rear support elements, are fitted with a shock absorber system.
Melone discloses wherein each of the at least two front support elements (see at least [0153]: “the shock absorber 502 of each front independent suspension system 516 …”: Rationale: “Each front independent suspension system” clearly indicates two separate front supports, satisfying the requirement for at least two front support elements), and each of the at least two rear support elements (see at least [0181]: “…the rear wheel suspension assembly 632 includes on the opposite sides of the frame 608 a first link 768 and a second link 772 controlling upward and downward movement of the respective rear wheels 616.”: Rationale: Opposite-side links controlling “respective rear wheels” disclose two independent rear supports, satisfying “each of the at least two rear support elements.”), are fitted (see at least [0173]: “Some embodiments of each front wheel independent suspension assembly 628… have shock absorbers 728 and/or suspension springs 732.”: Rationale: “Have shock absorbers” shows the assemblies are equipped—i.e., fitted—with damping components, directly meeting the “are fitted” requirement in the claim) with a shock absorber system (see at least [0187]: “Shock absorber and spring assemblies 808 can be used with the rear wheel suspension assembly 632 …”: Rationale: “Shock absorber and spring assemblies” constitute a shock-absorber system installed with the rear suspension assembly, satisfying the “with a shock absorber system” requirement).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Santhosh, Letsky, Koselka, and Melone before them, to implement Melone’s shock-absorber/suspension assemblies on each front and each rear support of the Santhosh/Letsky/Koselka platform to improve terrain compliance, stability, and end-effector accuracy—a routine integration yielding predictable results.