DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed 05/02/2025 has been entered. Claim 1 is amended. Claims 1-20 are pending and stand rejected as detailed below. This action is final as necessitated by amendment.
Claim Objections
The amendment to claim 1 has been entered. Therefore, the objection to claim 1 has been withdrawn.
Response to Arguments
Claim Rejections under 35 U.S.C. §103
Independent claims 1 and 12:
Applicant argues that Ferrari does not teach or suggest that "capture a sequence of images of a field-of-view (FOV) of a defined area of a field when the vehicle is in motion at the determined plurality of time instants". Ferrari discloses that an imaging device has a field of view configured to capture images of an adjacent area or portion of the field disposed along the side of the work vehicle. However, Ferrari does not disclose capturing a sequence of images of an FOV of a defined area while the vehicle is in motion. Ferrari merely discloses capturing images of an adjacent area or portion of the field disposed along the side of the work vehicle, and is silent on capturing an FOV when the vehicle is moving or in motion. Ferrari is also silent on capturing an FOV when the vehicle is in motion at the determined plurality of time instants.
Applicant’s arguments with respect to the rejections of claims 1 and 12 under 35 U.S.C. §103 have been fully considered but are not persuasive because Ferrari discloses the motion of the vehicle (paragraph 0019; “In several embodiments, the imaging device(s) may be configured to capture side view images of the field from its installed location on the work vehicle or the implement. For instance, the imaging device(s) may be installed on the work vehicle or the implement such that the imaging device(s) has a field of view directed towards the portion(s) of the field passing along one or both sides of the work vehicle/implement as the tillage operation is being performed (e.g., in a direction generally perpendicular to the direction of travel of the work vehicle)”). Furthermore, Ferrari also discloses the plurality of time instants (paragraph 0040; “Additionally, in several embodiments, the location data stored within the location database 122 may also be correlated to the image data stored within the image database 118. For instance, in one embodiment, the location coordinates derived from the positioning device(s) 124 and the image(s) captured by the imaging device(s) 104 may both be time-stamped.”).
Applicant also argues that White does not teach or suggest that "generate a bounding box around the crop plant being tracked and detect a plurality of buffer values associated with the image-sensor". White discloses that the bounding box is generated from a bounding algorithm that indicates the range of pixels indicative of the 3D model of the object. The bounding algorithm identifies the minimum and maximum points along the dimensional axes. However, White does not disclose generating a bounding box around a crop plant to track the crop plant. Instead, White discloses that the bounding box is drawn around the cat object and is entirely silent on generating a bounding box around a crop plant. Furthermore, White discloses displaying a 3D rectangular box for an operator to view, allowing the operator to rotate and position the rectangular box to view the 3D model of the object from various perspectives. However, White does not disclose generating a bounding box to provide the geospatial location of an object, such as a crop plant. Consequently, White also fails to disclose that the control device is configured to detect a plurality of buffer values.
Applicant’s arguments with respect to the rejections of claims 1 and 12 under 35 U.S.C. §103 have been fully considered but are not persuasive. More specifically, White discloses the bounding box (paragraph 0102; “In an embodiment, the bounding box 1306 is presented as a two dimensional rectangle around an image, such as when the bounding box 1306 drawn around the cat object is presented to a user.”). Furthermore, Ferrari teaches detecting and tracking the crop plant via the images in paragraph 0042, and White teaches generating a bounding box around a cat. As a result, all the claimed elements were known in the prior art (capturing image data and generating a bounding box), one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art at the time of the invention. Furthermore, the claim also would have been obvious because the substitution of one known element (image of the cat) for another (image of the crop) would have yielded predictable results to one of ordinary skill in the art. Furthermore, under the broadest reasonable interpretation (BRI) of the claim limitation, the examiner is not required to provide a reference for the geospatial location of the object. However, for clarity, the examiner points out that White also teaches finding the location of the object in paragraphs 0058 and 0060. Furthermore, White also teaches the buffer values (paragraph 0102; “A bounding algorithm can apply a buffer value such that a resulting bounding box 1306 is larger than the 3D model of the object 1302. For example, the value of minimum points may be reduced and the value of maximum points may be increased.”).
Applicant argues that Ferrari does not teach or suggest that "cause an implement attached to the vehicle to perform a predefined action on the crop plant based on the generated bounding box and the detected plurality of buffer values". Ferrari discloses an associated controller configured to estimate a crop residue parameter associated with the imaged portion of the field based on the estimated crop residue. However, Ferrari fails to disclose that the control device is configured to cause an implement attached to the vehicle to perform a predefined action on the crop plant based on the generated bounding box and the detected plurality of buffer values. Moreover, Ferrari only estimates the crop residue parameter based on estimated crop residue, but is silent on performing a predefined action on the crop plant based on the generated bounding box.
Applicant’s arguments with respect to the rejections of claims 1 and 12 under 35 U.S.C. §103 have been fully considered but are not persuasive because Ferrari teaches an implement attached to the vehicle to perform a predefined action on the crop plant based on the generated bounding box and the detected plurality of buffer values (Ferrari 0031; “Based on the estimated crop residue parameter, the controller 102 may then control/adjust the operation of the tillage implement 12, as necessary, to maintain the crop residue parameter at a given target value and/or within a given target range (e.g., an operating ranged defined around a target crop residue percentage set for the field).”, wherein the tillage implement 12 is the implement attached to the vehicle and the predefined action is the controlling or adjusting of the operation of the tillage implement 12). Applicant is correct to state that Ferrari only estimates the crop residue parameter based on estimated crop residue, but is silent on performing a predefined action on the crop plant based on the generated bounding box. However, the examiner points out that the combination of Ferrari and White teaches “perform a predefined action on the crop plant based on the generated bounding box and the detected plurality of buffer values”. Furthermore, MPEP 2145(IV) states that one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. Where a rejection of a claim is based on two or more references, a reply that is limited to what a subset of the applied references teaches or fails to teach, or that fails to address the combined teaching of the applied references, may be considered to be an argument that attacks the reference(s) individually.
Dependent claims 2-11 and 13-20:
Applicant argues that dependent claims 2-11 and 13-20 are allowable at least by virtue of their dependency on independent claims 1 and 12.
Applicant’s arguments with respect to the rejections of claims 2-11 and 13-20 under 35 U.S.C. §103 have been fully considered but are not persuasive because claims 1 and 12 are unpatentable over Ferrari in view of White and Sergeev.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 4-6, 12, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Ferrari (US 20180210450 A1), and further in view of White (US 20200302241 A1) and SERGEEV (US 20230137419 A1).
Regarding claim 1, Ferrari teaches (Currently Amended) A modular apparatus mounted in a vehicle (Ferrari, at least one para. 0031; “As shown in FIG. 1, in one embodiment, an imaging device 104 may be coupled to one of the sides of the work vehicle 10 such that the imaging device 104 has a field of view 106 that allows it to capture images of an adjacent area or portion 108 of the field disposed along the side of the work vehicle 10. ”), comprising:
an image-capture device that comprises a printed circuit board (PCB) having a perforation to accommodate an image-sensor (Ferrari, at least one para. 0030; “the imaging device(s) may correspond to any suitable camera(s), such as single-spectrum camera or a multi-spectrum camera configured to capture images in the visible light range and/or infrared spectral range. Additionally, in a particular embodiment, the camera(s) may correspond to a single lens camera configured to capture two-dimensional images or a stereo camera(s) having two or more lenses with a separate image sensor for each lens to allow the camera(s) to capture stereographic or three-dimensional images.”, wherein the PCB is inherent within an image-capturing device)
a control device configured to control the image-capture device (Ferrari, at least one para. 0031; “As will be described below, by analyzing the images captured by the imaging device 104, an associated controller 102 (FIG. 3) may be configured to estimate a crop residue parameter associated with the imaged portion(s) of the field (e.g., a percent crop residue coverage).”), wherein the control device is further configured to:
determine a plurality of time instants at which the image-sensor (Ferrari, at least one para. 0040; “the location coordinates derived from the positioning device(s) 124 and the image(s) captured by the imaging device(s) 104 may both be time-stamped.”)
capture a sequence of images of a field-of-view (FOV) (Ferrari, at least one para. 0037; “the imaging device(s) 104 may be configured to continuously or periodically capture side view images of adjacent portion(s) of the field as the tillage operation is being performed. In such an embodiment, the images transmitted to the controller 102 from the imaging device(s) 104 may be stored within the image database 118 for subsequent processing and/or analysis.”) of a defined area of a field (Ferrari, at least one para. 0031; “the imaging device 104 has a field of view 106 that allows it to capture images of an adjacent area or portion 108 of the field disposed along the side of the work vehicle 10. ”) when the vehicle is in motion at the determined plurality of time instants (Ferrari, at least one para. 0037; “the imaging device(s) 104 may be configured to continuously or periodically capture side view images of adjacent portion(s) of the field as the tillage operation is being performed. In such an embodiment, the images transmitted to the controller 102 from the imaging device(s) 104 may be stored within the image database 118 for subsequent processing and/or analysis.”) and (Ferrari, at least one para. 0019; “In several embodiments, the imaging device(s) may be configured to capture side view images of the field from its installed location on the work vehicle or the implement. For instance, the imaging device(s) may be installed on the work vehicle or the implement such that the imaging device(s) has a field of view directed towards the portion(s) of the field passing along one or both sides of the work vehicle/implement as the tillage operation is being performed (e.g., in a direction generally perpendicular to the direction of travel of the work vehicle)”, wherein the direction of travel of the work vehicle teaches the vehicle is in motion) and (Ferrari, at least one para. 0040; “Additionally, in several embodiments, the location data stored within the location database 122 may also be correlated to the image data stored within the image database 118. For instance, in one embodiment, the location coordinates derived from the positioning device(s) 124 and the image(s) captured by the imaging device(s) 104 may both be time-stamped.”, wherein the plurality of time stamps teaches the plurality of time instants);
detect and track a crop plant in the defined area from the captured sequence of images (Ferrari, at least one para. 0042; “Referring still to FIG. 3, in several embodiments, the instructions 116 stored within the memory 112 of the controller 102 may be executed by the processor(s) 110 to implement an image analysis module 126. In general, the image analysis module 126 may be configured to analyze the images received by the imaging device(s) 104 to allow the controller 102 to estimate one or more crop residue parameters associated with the field currently being tilled.”)
generate a bounding box around the crop plant being tracked and detect a plurality of buffer values associated with the image-sensor; and
cause an implement attached to the vehicle to perform a predefined action on the crop plant based on the generated bounding box and the detected plurality of buffer values (Ferrari, at least one para. 0031; “Based on the estimated crop residue parameter, the controller 102 may then control/adjust the operation of the tillage implement 12, as necessary, to maintain the crop residue parameter at a given target value and/or within a given target range (e.g., an operating ranged defined around a target crop residue percentage set for the field).”).
Ferrari does not explicitly teach wherein the PCB comprises a plurality of layers of strobe-lights, wherein each layer of strobe-lights is distributed on the PCB around the perforation to surround the image-sensor when mounted on the PCB; and
as well as the plurality of layers of strobe-lights are to be activated;
using a pre-trained artificial intelligence model;
generate a bounding box around the crop plant being tracked and detect a plurality of buffer values associated with the image-sensor; and
However, SERGEEV in the same field of endeavor (SERGEEV, at least one para. 0074; “In some embodiments, a high intensity illumination system may be positioned on or part of a device, such as a vehicle.”) teaches wherein the PCB (SERGEEV, at least one para. 0077; “A control system 700 may comprise a computer 701 configured to control a strobe circuit system 710 comprising a strobe control printed circuit board (PCB) 702. ”) comprises a plurality of layers of strobe-lights (SERGEEV, at least one para. 0077; “The strobe PCB may provide a strobe signal (e.g., a pulsed voltage) to a lighting array 703 (e.g., an LED array). The strobe control PCB 702 may further provide a camera trigger signal to one or more cameras 704.”), wherein each layer of strobe-lights is distributed on the PCB around the perforation to surround the image-sensor when mounted on the PCB (SERGEEV, at least one para. 0077; “The strobe PCB may provide a strobe signal (e.g., a pulsed voltage) to a lighting array 703 (e.g., an LED array). The strobe control PCB 702 may further provide a camera trigger signal to one or more cameras 704.”) and (SERGEEV, at least one para. 0068; “A high intensity illumination system may evenly illuminate a region of interest (e.g., a region on a surface underneath a vehicle) such that an image collected of the region of interest”); and
as well as the plurality of layers of strobe-lights are to be activated (SERGEEV, at least one para. 0077; “The high intensity illumination systems described herein may be controlled by a control system. The control system may control power to the light emitters (e.g., LEDs) of a lighting array and synchronize an on/off state of the LEDs to a camera shutter or exposure.”);
using a pre-trained artificial intelligence model;
generate a bounding box around the crop plant being tracked and detect a plurality of buffer values associated with the image-sensor; and
The combination of Ferrari and SERGEEV is considered to be analogous to the claimed invention because both are in the same field of modular image-capturing apparatuses as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the image capturing device of Ferrari with the teaching of SERGEEV. One of ordinary skill in the art would have been motivated to make this modification so that the region of the surface can be evenly illuminated with respect to different lighting conditions (SERGEEV, at least one para. 0072-0075).
The combination of Ferrari and SERGEEV does not explicitly teach using a pre-trained artificial intelligence model;
generate a bounding box around the crop plant being tracked and detect a plurality of buffer values associated with the image-sensor; and
However, White in the same field of endeavor (White, at least one para. 0023; “The present document describes a system that produces training data for a machine learning system. In an embodiment, the system obtains video of an object within an environment. The object and/or camera are moved relative to one another to capture a variety of images of the object from different angles and under different lighting conditions.”) teaches using a pre-trained artificial intelligence model (White, at least one para. 0060; “In one example, the machine learning model and image data is provided to the prediction system 602. A machine learning training subsystem 608 of the prediction system 602 uses the image data and the machine learning model to label one or more objects present in the image. The label information and the image data is provided to the pose estimate generation system 604 which filters the image data 610 and then estimates 612 the orientation or pose of the object in the image.”);
generate a bounding box around the crop plant being tracked and detect a plurality of buffer values associated with the image-sensor (White, at least one para. 0102; “In an embodiment, the bounding box 1306 is presented as a two dimensional rectangle around an image, such as when the bounding box 1306 drawn around the cat object is presented to a user. For example, a 3D viewer computer application displays a 3D rectangular box for an operator to view, where the operator can rotate and position the rectangular box so as to view the 3D model of the object 1302 from various perspectives. A bounding algorithm can apply a buffer value such that a resulting bounding box 1306 is larger than the 3D model of the object 1302. For example, the value of minimum points may be reduced and the value of maximum points may be increased.”); and
The combination of Ferrari, SERGEEV, and White is considered to be analogous to the claimed invention because all of them are in the same field of modular image-capturing apparatuses as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Ferrari's detecting and tracking of a crop plant with a general computer-vision algorithm with the teaching of White. One of ordinary skill in the art would have been motivated to make this modification so that the output efficiency and versatility can be improved while reducing the reliance on simulation’s ability to perfectly simulate real-world conditions (White, at least one para. 0076).
Regarding claim 4, Ferrari teaches (Original) The modular apparatus according to claim 1, wherein the control device is further configured to update the FOV of the image-sensor to a defined value (Ferrari, at least one para. 0055; “As described herein, the field of view of an imaging device may be directed generally perpendicular of the travel direction 20 of the work vehicle if the center of the field of view is directly along a view path having an angle of orientation defined relative to a reference line or plane extending perpendicular to the travel direction 20 that falls within an angular range of from about +/−25 degrees, such as from about +/−20 degrees or from about +/−10 degrees and/or any other subranges therebetween.”), and wherein the plurality of time instants is re-determined based on the updated FOV of the image-sensor (Ferrari, at least one para. 0040; “the location coordinates derived from the positioning device(s) 124 and the image(s) captured by the imaging device(s) 104 may both be time-stamped. In such an embodiment, the time-stamped data may allow each image captured by the imaging device(s) 102 to be matched or correlated to a corresponding set of location coordinates received from the positioning device(s) 124, thereby allowing the precise location of the portion of the field depicted within a given image to be known (or at least capable of calculation) by the controller 102.”, wherein it is inherent that the time instants change as the angular view of the FOV changes).
Regarding claim 5, SERGEEV teaches (Original) The modular apparatus according to claim 1, wherein the PCB has a first surface and a second surface opposite the first surface (SERGEEV, at least one para. 0077; “A control system 700 may comprise a computer 701 configured to control a strobe circuit system 710 comprising a strobe control printed circuit board (PCB) 702.”, wherein it is inherent that the PCB has two oppositely positioned surfaces), and wherein the plurality of layers of strobe-lights are arranged on the first surface and a plurality of capacitors are arranged on the second surface of the PCB (SERGEEV, at least one para. 0075; “FIG. 6A shows bottom view of different LED configurations for a lighting array 150. FIG. 6B shows a bottom view of a vehicle 100 equipped with multiple lighting arrays 150 to illuminate a surface underneath the vehicle.”, wherein it is inherent and obvious that the capacitors are on the other side of the PCB, opposite the lighting arrays, to be protected from dust and debris).
Regarding claim 6, SERGEEV teaches (Original) The modular apparatus according to claim 5, wherein the plurality of capacitors of the PCB are configured to supply power to the plurality of layers of strobe-lights at the determined plurality of time instants (SERGEEV, at least one para. 0077; “the capacitors 706 may provide pulsed power to the light emitters of a lighting array 150 while the power generation provides sustained power by charging the capacitors over time while the light emitters are off and discharging the capacitors to turn the light emitters on.”).
Regarding claim 12, Ferrari teaches (Original) A method of operation of a modular apparatus mounted in a vehicle (Ferrari, at least one para. 0031; “As shown in FIG. 1, in one embodiment, an imaging device 104 may be coupled to one of the sides of the work vehicle 10 such that the imaging device 104 has a field of view 106 that allows it to capture images of an adjacent area or portion 108 of the field disposed along the side of the work vehicle 10. ”), comprising:
in a modular apparatus (Ferrari, at least one para. 0030; “the imaging device(s) may correspond to any suitable camera(s), such as single-spectrum camera or a multi-spectrum camera configured to capture images in the visible light range and/or infrared spectral range. Additionally, in a particular embodiment, the camera(s) may correspond to a single lens camera configured to capture two-dimensional images or a stereo camera(s) having two or more lenses with a separate image sensor for each lens to allow the camera(s) to capture stereographic or three-dimensional images.”, wherein the PCB is inherent within an image-capturing device):
determining a plurality of time instants at which an image-sensor of the modular apparatus (Ferrari, at least one para. 0040; “the location coordinates derived from the positioning device(s) 124 and the image(s) captured by the imaging device(s) 104 may both be time-stamped.”)
capturing a sequence of images of a field-of-view (FOV) (Ferrari, at least one para. 0037; “the imaging device(s) 104 may be configured to continuously or periodically capture side view images of adjacent portion(s) of the field as the tillage operation is being performed. In such an embodiment, the images transmitted to the controller 102 from the imaging device(s) 104 may be stored within the image database 118 for subsequent processing and/or analysis.”) of a defined area of a field (Ferrari, at least one para. 0031; “the imaging device 104 has a field of view 106 that allows it to capture images of an adjacent area or portion 108 of the field disposed along the side of the work vehicle 10. ”) when the vehicle is in motion at the determined plurality of time instants (Ferrari, at least one para. 0037; “the imaging device(s) 104 may be configured to continuously or periodically capture side view images of adjacent portion(s) of the field as the tillage operation is being performed. In such an embodiment, the images transmitted to the controller 102 from the imaging device(s) 104 may be stored within the image database 118 for subsequent processing and/or analysis.”) and (Ferrari, at least one para. 0019; “In several embodiments, the imaging device(s) may be configured to capture side view images of the field from its installed location on the work vehicle or the implement. For instance, the imaging device(s) may be installed on the work vehicle or the implement such that the imaging device(s) has a field of view directed towards the portion(s) of the field passing along one or both sides of the work vehicle/implement as the tillage operation is being performed (e.g., in a direction generally perpendicular to the direction of travel of the work vehicle)”, wherein the direction of travel of the work vehicle teaches the vehicle is in motion) and (Ferrari, at least one para. 0040; “Additionally, in several embodiments, the location data stored within the location database 122 may also be correlated to the image data stored within the image database 118. For instance, in one embodiment, the location coordinates derived from the positioning device(s) 124 and the image(s) captured by the imaging device(s) 104 may both be time-stamped.”, wherein the plurality of time stamps teaches the plurality of time instants);
detecting and tracking a crop plant in the defined area from the captured sequence of images (Ferrari, at least one para. 0042; “Referring still to FIG. 3, in several embodiments, the instructions 116 stored within the memory 112 of the controller 102 may be executed by the processor(s) 110 to implement an image analysis module 126. In general, the image analysis module 126 may be configured to analyze the images received by the imaging device(s) 104 to allow the controller 102 to estimate one or more crop residue parameters associated with the field currently being tilled.”)
generating a bounding box around the crop plant being tracked and detect a plurality of buffer values associated with the image-sensor; and
causing an implement attached to the vehicle to perform a predefined action on the crop plant based on the generated bounding box and the detected plurality of buffer values (Ferrari, at least one para. 0031; “Based on the estimated crop residue parameter, the controller 102 may then control/adjust the operation of the tillage implement 12, as necessary, to maintain the crop residue parameter at a given target value and/or within a given target range (e.g., an operating ranged defined around a target crop residue percentage set for the field).”).
Ferrari does not explicitly teach that a plurality of layers of strobe-lights around the image-sensor are to be activated. However, SERGEEV teaches a plurality of layers of strobe-lights (SERGEEV, at least one para. 0077; “The strobe PCB may provide a strobe signal (e.g., a pulsed voltage) to a lighting array 703 (e.g., an LED array). The strobe control PCB 702 may further provide a camera trigger signal to one or more cameras 704.”) around the image-sensor are to be activated (SERGEEV, at least one para. 0077; “The high intensity illumination systems described herein may be controlled by a control system. The control system may control power to the light emitters (e.g., LEDs) of a lighting array and synchronize an on/off state of the LEDs to a camera shutter or exposure.”);
using a pre-trained artificial intelligence model;
generating a bounding box around the crop plant being tracked and detect a plurality of buffer values associated with the image-sensor; and
The combination of Ferrari and SERGEEV is considered to be analogous to the claimed invention because both are in the same field of modular image-capturing apparatuses as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the image capturing device of Ferrari with the teaching of SERGEEV. One of ordinary skill in the art would have been motivated to make this modification so that the region of the surface can be evenly illuminated with respect to different lighting conditions (SERGEEV, at least one para. 0072-0075).
The combination of Ferrari and SERGEEV does not explicitly teach using a pre-trained artificial intelligence model;
generating a bounding box around the crop plant being tracked and detect a plurality of buffer values associated with the image-sensor; and
However, White in the same field of endeavor (White, at least one para. 0023; “The present document describes a system that produces training data for a machine learning system. In an embodiment, the system obtains video of an object within an environment. The object and/or camera are moved relative to one another to capture a variety of images of the object from different angles and under different lighting conditions.”) teaches using a pre-trained artificial intelligence model (White, at least one para. 0060; “In one example, the machine learning model and image data is provided to the prediction system 602. A machine learning training subsystem 608 of the prediction system 602 uses the image data and the machine learning model to label one or more objects present in the image. The label information and the image data is provided to the pose estimate generation system 604 which filters the image data 610 and then estimates 612 the orientation or pose of the object in the image.”);
generating a bounding box around the crop plant being tracked and detect a plurality of buffer values associated with the image-sensor (White, at least one para. 0102; “In an embodiment, the bounding box 1306 is presented as a two dimensional rectangle around an image, such as when the bounding box 1306 drawn around the cat object is presented to a user. For example, a 3D viewer computer application displays a 3D rectangular box for an operator to view, where the operator can rotate and position the rectangular box so as to view the 3D model of the object 1302 from various perspectives. A bounding algorithm can apply a buffer value such that a resulting bounding box 1306 is larger than the 3D model of the object 1302. For example, the value of minimum points may be reduced and the value of maximum points may be increased.”); and
The combination of Ferrari, SERGEEV, and White is considered to be analogous to the claimed invention because all of the references are in the same field of modular image-capturing apparatuses as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the detection and tracking of a crop plant with a general computer-vision algorithm of Ferrari with the teachings of White. One of ordinary skill in the art would have been motivated to make this modification so that the output efficiency and versatility can be improved while reducing the reliance on a simulation's ability to perfectly simulate real-world conditions (White, at least one para. 0076).
Regarding claim 16, Ferrari teaches (Original) The method according to claim 12, further comprising updating the FOV of the image-sensor to a defined value (Ferrari, at least one para. 0055; “As described herein, the field of view of an imaging device may be directed generally perpendicular of the travel direction 20 of the work vehicle if the center of the field of view is directly along a view path having an angle of orientation defined relative to a reference line or plane extending perpendicular to the travel direction 20 that falls within an angular range of from about +/−25 degrees, such as from about +/−20 degrees or from about +/−10 degrees and/or any other subranges therebetween.”), and wherein the plurality of time instants is re-determined based on the updated FOV of the image-sensor (Ferrari, at least one para. 0040; “the location coordinates derived from the positioning device(s) 124 and the image(s) captured by the imaging device(s) 104 may both be time-stamped. In such an embodiment, the time-stamped data may allow each image captured by the imaging device(s) 102 to be matched or correlated to a corresponding set of location coordinates received from the positioning device(s) 124, thereby allowing the precise location of the portion of the field depicted within a given image to be known (or at least capable of calculation) by the controller 102.”, it is inherent that the plurality of time instants changes as the angular view of the FOV changes).
Claim(s) 2-3, 11, and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Ferrari (US 20180210450 A1), White (US 20200302241 A1), and SERGEEV (US 20230137419 A1) as applied to claims 1 and 12 above, respectively, and further in view of Vesperman (US 20220350991 A1).
Regarding claim 2, Ferrari teaches (Original) The modular apparatus according to claim 1, wherein the control device is further configured to automatically change an extent of an action area of the predefined action by the implement attached to the vehicle based on a change in the plurality of buffer values (Ferrari, at least one para. 0031; “Based on the estimated crop residue parameter, the controller 102 may then control/adjust the operation of the tillage implement 12, as necessary, to maintain the crop residue parameter at a given target value and/or within a given target range (e.g., an operating ranged defined around a target crop residue percentage set for the field).”).
Ferrari does not explicitly teach that the control device is further configured to automatically change an extent of an action area of the predefined action by the implement attached to the vehicle based on a change in the plurality of buffer values.
However, Vesperman in the same field of endeavor (Vesperman, at least one para. 0001; “This disclosure relates generally to a detection system for vehicles, and more specifically to detecting an edge between surfaces to control the operation (e.g., automated steering) of a farming vehicle.”) teaches wherein the control device is further configured to automatically change an extent of an action area of the predefined action by the implement attached to the vehicle based on a change in the plurality of buffer values (Vesperman, at least one para. 0081; “In some embodiments, the row edge detection module 130 may filter and eliminate candidate edges from consideration as the best fit edge before applying an edge detection model.”) and (Vesperman, at least one para. 0084; “Target offset min and max may be selected based on a type of operation that the vehicle is performing and a typical bounding box location for the type of operation. For example, if the vehicle is pulling implements with narrow working widths, the target offset min and max may be small (e.g., two to five feet). In another example, if the vehicle is pulling wide tillage implements, calibration may be done with target offset min and max that are larger (e.g., fifteen to seventeen feet). A large target offset min and max may be used to calibrate a vehicle equipped with a camera having a large field of view (e.g., greater than 130 degrees) or a unique camera 3D pose.”).
The combination of Ferrari, SERGEEV, White, and Vesperman is considered to be analogous to the claimed invention because all of the references are in the same field of modular image-capturing apparatuses as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the given target value/range of Ferrari with the teachings of Vesperman. One of ordinary skill in the art would have been motivated to make this modification so that the best fit edge can be automatically calculated by the processor (Vesperman, at least one para. 0081).
Regarding claim 3, White teaches (Original) The modular apparatus according to claim 1, wherein the plurality of buffer values (White, at least one para. 0102; “A bounding algorithm can apply a buffer value such that a resulting bounding box 1306 is larger than the 3D model of the object 1302. For example, the value of minimum points may be reduced and the value of maximum points may be increased.”)
White does not explicitly teach that the plurality of buffer values comprises a front buffer value and a rear buffer value, and
wherein an increase in the front buffer value further dynamically extends an action area ahead of the crop plant in the FOV from a point of view of the image-capture device when the modular apparatus moves towards the crop plant, and
wherein an increase in the rear buffer value further dynamically extends the action area behind the crop plant in the FOV from the point of view of the image-capture device when the modular apparatus moves towards the crop plant, and
wherein the action area corresponds to a spray region comprising the crop plant.
However, Vesperman in the same field of endeavor (Vesperman, at least one para. 0001; “This disclosure relates generally to a detection system for vehicles, and more specifically to detecting an edge between surfaces to control the operation (e.g., automated steering) of a farming vehicle.”) teaches that the plurality of buffer values comprises a front buffer value and a rear buffer value (Vesperman, at least one para. 0084; “Target offset min and max may be selected based on a type of operation that the vehicle is performing and a typical bounding box location for the type of operation. For example, if the vehicle is pulling implements with narrow working widths, the target offset min and max may be small (e.g., two to five feet). In another example, if the vehicle is pulling wide tillage implements, calibration may be done with target offset min and max that are larger (e.g., fifteen to seventeen feet)”, wherein the target offset min is the front buffer value and the target offset max is the rear buffer value), and
wherein an increase in the front buffer value further dynamically extends an action area ahead of the crop plant in the FOV from a point of view of the image-capture device when the modular apparatus moves towards the crop plant (Vesperman, at least one para. 0084; “Target offset min and max may be selected based on a type of operation that the vehicle is performing and a typical bounding box location for the type of operation. For example, if the vehicle is pulling implements with narrow working widths, the target offset min and max may be small (e.g., two to five feet). In another example, if the vehicle is pulling wide tillage implements, calibration may be done with target offset min and max that are larger (e.g., fifteen to seventeen feet).”), and
wherein an increase in the rear buffer value further dynamically extends the action area behind the crop plant in the FOV from the point of view of the image-capture device when the modular apparatus moves towards the crop plant (Vesperman, at least one para. 0084; “Target offset min and max may be selected based on a type of operation that the vehicle is performing and a typical bounding box location for the type of operation. For example, if the vehicle is pulling implements with narrow working widths, the target offset min and max may be small (e.g., two to five feet). In another example, if the vehicle is pulling wide tillage implements, calibration may be done with target offset min and max that are larger (e.g., fifteen to seventeen feet).”), and
wherein the action area corresponds to a spray region comprising the crop plant (Vesperman, at least one para. 0020; “Farming operations may include mowing, harvesting, spraying, tilling, etc. An example of a variation within one surface includes a soil surface with a type of crop planted and without crops planted.”).
The combination of Ferrari, SERGEEV, White, and Vesperman is considered to be analogous to the claimed invention because all of the references are in the same field of modular image-capturing apparatuses as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the buffer values of White with the teachings of Vesperman. One of ordinary skill in the art would have been motivated to make this modification so that the implement attached to the vehicle is able to maximize its functionality (Vesperman, at least one para. 0084).
Regarding claim 11, Ferrari teaches (Original) The modular apparatus according to claim 1 (Ferrari, at least one para. 0031; “As shown in FIG. 1, in one embodiment, an imaging device 104 may be coupled to one of the sides of the work vehicle 10 such that the imaging device 104 has a field of view 106 that allows it to capture images of an adjacent area or portion 108 of the field disposed along the side of the work vehicle 10. ”),
Ferrari does not explicitly teach wherein a defined confidence threshold is set in real-time or near real-time via a user interface (UI) rendered on a display device of the vehicle, and wherein the implement attached to the vehicle is caused to perform the predefined action on the crop plant further based on the set defined confidence threshold.
However, Vesperman in the same field of endeavor (Vesperman, at least one para. 0001; “This disclosure relates generally to a detection system for vehicles, and more specifically to detecting an edge between surfaces to control the operation (e.g., automated steering) of a farming vehicle.”) teaches wherein a defined confidence threshold is set in real-time or near real-time via a user interface (UI) rendered on a display device of the vehicle (Vesperman, at least one para. 0059; “The row edge detection module 130 may provide the candidate edges to an operator (e.g., via the user interface module 123) for display at the display 112. The operator may select the best fit edge and the row edge detection module 130 may receive the user-selected best fit edge.”), and wherein the implement attached to the vehicle is caused to perform the predefined action on the crop plant further based on the set defined confidence threshold (Vesperman, at least one para. 0059; “The row edge detection module 130 may perform optional confidence score determinations associated with one or more of the candidate edges including the user-selected edge. The row edge detection module 130 may prompt the user to select another edge if the confidence score is below a threshold. The navigation module 124 may use the user-selected edge to determine a heading and/or lateral errors to modify the operation of the vehicle 110.”).
The combination of Ferrari, SERGEEV, White, and Vesperman is considered to be analogous to the claimed invention because all of the references are in the same field of modular image-capturing apparatuses as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the imaging device of Ferrari with the teachings of Vesperman. One of ordinary skill in the art would have been motivated to make this modification so that the vehicle can determine the heading and lateral errors (Vesperman, at least one para. 0059).
Regarding claim 13, Ferrari teaches (Original) The method according to claim 12, further comprising automatically changing an extent of an action area of the predefined action by the implement attached to the vehicle based on a change in the plurality of buffer values (Ferrari, at least one para. 0031; “Based on the estimated crop residue parameter, the controller 102 may then control/adjust the operation of the tillage implement 12, as necessary, to maintain the crop residue parameter at a given target value and/or within a given target range (e.g., an operating ranged defined around a target crop residue percentage set for the field).”).
Ferrari does not explicitly teach automatically changing an extent of an action area of the predefined action by the implement attached to the vehicle based on a change in the plurality of buffer values.
However, Vesperman in the same field of endeavor (Vesperman, at least one para. 0001; “This disclosure relates generally to a detection system for vehicles, and more specifically to detecting an edge between surfaces to control the operation (e.g., automated steering) of a farming vehicle.”) teaches further comprising automatically changing an extent of an action area of the predefined action by the implement attached to the vehicle based on a change in the plurality of buffer values (Vesperman, at least one para. 0081; “In some embodiments, the row edge detection module 130 may filter and eliminate candidate edges from consideration as the best fit edge before applying an edge detection model.”) and (Vesperman, at least one para. 0084; “Target offset min and max may be selected based on a type of operation that the vehicle is performing and a typical bounding box location for the type of operation. For example, if the vehicle is pulling implements with narrow working widths, the target offset min and max may be small (e.g., two to five feet). In another example, if the vehicle is pulling wide tillage implements, calibration may be done with target offset min and max that are larger (e.g., fifteen to seventeen feet). A large target offset min and max may be used to calibrate a vehicle equipped with a camera having a large field of view (e.g., greater than 130 degrees) or a unique camera 3D pose.”).
The combination of Ferrari, SERGEEV, White, and Vesperman is considered to be analogous to the claimed invention because all of the references are in the same field of modular image-capturing apparatuses as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the given target value/range of Ferrari with the teachings of Vesperman. One of ordinary skill in the art would have been motivated to make this modification so that the best fit edge can be automatically calculated by the processor (Vesperman, at least one para. 0081).
Regarding claim 14, White teaches (Original) The method according to claim 12, wherein the plurality of buffer values (White, at least one para. 0102; “A bounding algorithm can apply a buffer value such that a resulting bounding box 1306 is larger than the 3D model of the object 1302. For example, the value of minimum points may be reduced and the value of maximum points may be increased.”)
White does not explicitly teach that the plurality of buffer values comprises a front buffer value and a rear buffer value, and wherein the method further comprises:
increasing the front buffer value from a current front buffer value to dynamically extend an action area ahead of the crop in the FOV from a point of view of the image-sensor when the modular apparatus moves towards the crop plant.
However, Vesperman in the same field of endeavor (Vesperman, at least one para. 0001; “This disclosure relates generally to a detection system for vehicles, and more specifically to detecting an edge between surfaces to control the operation (e.g., automated steering) of a farming vehicle.”) teaches that the plurality of buffer values comprises a front buffer value and a rear buffer value (Vesperman, at least one para. 0084; “Target offset min and max may be selected based on a type of operation that the vehicle is performing and a typical bounding box location for the type of operation. For example, if the vehicle is pulling implements with narrow working widths, the target offset min and max may be small (e.g., two to five feet). In another example, if the vehicle is pulling wide tillage implements, calibration may be done with target offset min and max that are larger (e.g., fifteen to seventeen feet)”, wherein the target offset min is the front buffer value and the target offset max is the rear buffer value),
wherein the method further comprises:
increasing the front buffer value from a current front buffer value to dynamically extend an action area ahead of the crop in the FOV from a point of view of the image-sensor when the modular apparatus moves towards the crop plant (Vesperman, at least one para. 0084; “Target offset min and max may be selected based on a type of operation that the vehicle is performing and a typical bounding box location for the type of operation. For example, if the vehicle is pulling implements with narrow working widths, the target offset min and max may be small (e.g., two to five feet). In another example, if the vehicle is pulling wide tillage implements, calibration may be done with target offset min and max that are larger (e.g., fifteen to seventeen feet).”).
The combination of Ferrari, SERGEEV, White, and Vesperman is considered to be analogous to the claimed invention because all of the references are in the same field of modular image-capturing apparatuses as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the buffer values of White with the teachings of Vesperman. One of ordinary skill in the art would have been motivated to make this modification so that the implement attached to the vehicle is able to maximize its functionality (Vesperman, at least one para. 0084).
Regarding claim 15, White teaches (Original) The method according to claim 14 (White, at least one para. 0102; “A bounding algorithm can apply a buffer value such that a resulting bounding box 1306 is larger than the 3D model of the object 1302. For example, the value of minimum points may be reduced and the value of maximum points may be increased.”),
White does not explicitly teach further comprising increasing the rear buffer value from a current front buffer value to dynamically extend the action area behind the crop plant in the FOV from the point of view of the image-sensor when the modular apparatus moves towards the crop plant, and
wherein the action area corresponds to a spray region comprising the crop plant.
However, Vesperman in the same field of endeavor (Vesperman, at least one para. 0001; “This disclosure relates generally to a detection system for vehicles, and more specifically to detecting an edge between surfaces to control the operation (e.g., automated steering) of a farming vehicle.”) teaches further comprising increasing the rear buffer value from a current front buffer value to dynamically extend the action area behind the crop plant in the FOV from the point of view of the image-sensor when the modular apparatus moves towards the crop plant (Vesperman, at least one para. 0084; “Target offset min and max may be selected based on a type of operation that the vehicle is performing and a typical bounding box location for the type of operation. For example, if the vehicle is pulling implements with narrow working widths, the target offset min and max may be small (e.g., two to five feet). In another example, if the vehicle is pulling wide tillage implements, calibration may be done with target offset min and max that are larger (e.g., fifteen to seventeen feet).”), and
wherein the action area corresponds to a spray region comprising the crop plant (Vesperman, at least one para. 0020; “Farming operations may include mowing, harvesting, spraying, tilling, etc. An example of a variation within one surface includes a soil surface with a type of crop planted and without crops planted.”).
The combination of Ferrari, SERGEEV, White, and Vesperman is considered to be analogous to the claimed invention because all of the references are in the same field of modular image-capturing apparatuses as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the buffer values of White with the teachings of Vesperman. One of ordinary skill in the art would have been motivated to make this modification so that the implement attached to the vehicle is able to maximize its functionality (Vesperman, at least one para. 0084).
Claim(s) 7 is rejected under 35 U.S.C. 103 as being unpatentable over Ferrari (US 20180210450 A1), White (US 20200302241 A1), and SERGEEV (US 20230137419 A1) as applied to claim 1 above, and further in view of DELJKOVIC (US 20240202922 A1).
Regarding claim 7, Ferrari teaches (Original) The modular apparatus according to claim 1, wherein the image-capture device (Ferrari, at least one para. 0031; “As shown in FIG. 1, in one embodiment, an imaging device 104 may be coupled to one of the sides of the work vehicle 10 such that the imaging device 104 has a field of view 106 that allows it to capture images of an adjacent area or portion 108 of the field disposed along the side of the work vehicle 10. ”)
Ferrari does not explicitly teach that the image-capture device further comprises a filter screen to prevent dust particles and UV light from entering the image-sensor.
However, DELJKOVIC in the same field of endeavor (DELJKOVIC, at least one para. 0001; “This invention relates to a plant management system.”) teaches that the image-capture device further comprises a filter screen to prevent dust particles and UV light from entering the image-sensor (DELJKOVIC, at least one para. 0080; “The neutral density filter 402 may be designed to also absorb or reflect wavelengths outside the visible spectrum such as infrared or UV light, e.g. below 380 nm or above 740 nm.”, it is inherent that the neutral density filter 402 also stops dust particles from entering) and (DELJKOVIC, at least one para. 0070; “As shown in FIG. 3B, housing 306 may include a shield 308 to prevent water droplets from rain or sprays adhering to the cover of the housing, preventing accumulation of dust and fine debris, and preventing splashes of mud or water from landing on the cover of the housing, all of which may affect the visibility of the CMOS sensor 304.”).
The combination of Ferrari, SERGEEV, White, and DELJKOVIC is considered to be analogous to the claimed invention because all of the references are in the same field of modular image-capturing apparatuses as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the image-capturing device of Ferrari with the teachings of DELJKOVIC. One of ordinary skill in the art would have been motivated to make this modification so that the CMOS sensor can be protected from dust and UV light, thus improving the reliability of the image-capturing device 104 (DELJKOVIC, at least paras. 0070 and 0080).
Claim(s) 8-9 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Ferrari (US 20180210450 A1), White (US 20200302241 A1), and SERGEEV (US 20230137419 A1) as applied to claims 1 and 12 above, respectively, and further in view of Pertsel (US 12165436 B1).
Regarding claim 8, Ferrari teaches (Original) The modular apparatus according to claim 1, wherein the control device (Ferrari, at least one para. 0031; “an associated controller 102”)
Ferrari does not explicitly teach that the control device is further configured to execute mapping of pixel data of the FOV to distance information from a reference position of the control device when the vehicle is in motion.
However, Pertsel in the same field of endeavor (Pertsel, Col 1, lines 7-11; “The invention relates to vehicle occupancy detection generally and, more particularly, to a method and/or apparatus for implementing toll collection and carpool lane automation using in-vehicle computer vision and radar.”) teaches is further configured to execute mapping of pixel data of the FOV to distance information from a reference position of the control device when the vehicle is in motion (Pertsel, Col 23, lines 22-34; “The processors 106a-106n may determine the width of the reference objects (e.g., the number of pixels in the video frame). The width of the current size of the reference object may be compared to the stored width of the reference object to estimate a distance of the occupants of the ego vehicle 50 from the lens 112a-112n. For example, a number of pixels may be measured between the reference object and the head of the driver 202 to determine location coordinates of the head of the driver 202.”).
The combination of Ferrari, SERGEEV, White, and Pertsel is considered to be analogous to the claimed invention because all of the references are in the same field of modular image-capturing apparatuses as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the control device of Ferrari with the teachings of Pertsel. One of ordinary skill in the art would have been motivated to make this modification so that details within the mapping can be clearly identified (Pertsel, Col 23, lines 35-50).
Regarding claim 9, Ferrari teaches (Original) The modular apparatus according to claim 8, wherein the control device (Ferrari, at least one para. 0031; “an associated controller 102”)
Ferrari does not explicitly teach that the control device is further configured to update the mapping of the pixel data of the FOV to the distance information based on a change in at least one of the plurality of buffer values.
However, Pertsel in the same field of endeavor (Pertsel, Col 1, lines 7-11; “The invention relates to vehicle occupancy detection generally and, more particularly, to a method and/or apparatus for implementing toll collection and carpool lane automation using in-vehicle computer vision and radar.”) teaches is further configured to update the mapping of the pixel data of the FOV to the distance information based on a change in at least one of the plurality of buffer values (Pertsel, Col 42, lines 34-49; “The dotted boxes 420a-420c and the dotted boxes 422a-422d may comprise the pixel data corresponding to an object detected by the computer vision operations pipeline 162 and/or the CNN module 150. The dotted boxes 420a-420c and the dotted boxes 422a-422d are shown for illustrative purposes. In an example, the dotted boxes 420a-420c and the dotted boxes 422a-422d may be a visual representation of the object detection (e.g., the dotted boxes 420a-420c and the dotted boxes 422a-422d may not appear on an output video frame displayed on one of the displays 118a-118n). In another example, the dotted boxes 420a-420c and the dotted boxes 422a-422d may be a bounding box generated by the processors 106a-106n displayed on the video frame to indicate that an object has been detected (e.g., the bounding boxes 420a-420c and the bounding boxes 422a-422d may be displayed in a debug mode of operation).”).
The combination of Ferrari, SERGEEV, White, and Pertsel is considered to be analogous to the claimed invention because all of the references are in the same field of modular image-capturing apparatuses as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the control device of Ferrari with the teachings of Pertsel. One of ordinary skill in the art would have been motivated to make this modification so that details within the mapping can be tracked (Pertsel, Col 23, lines 35-50).
Regarding claim 17, Ferrari teaches (Original) The method according to claim 12 (Ferrari, at least one para. 0031; “As shown in FIG. 1, in one embodiment, an imaging device 104 may be coupled to one of the sides of the work vehicle 10 such that the imaging device 104 has a field of view 106 that allows it to capture images of an adjacent area or portion 108 of the field disposed along the side of the work vehicle 10. ”),
Ferrari does not explicitly teach executing mapping of pixel data of the FOV to distance information from a reference position of a control device of the modular apparatus when the vehicle is in motion.
However, Pertsel in the same field of endeavor (Pertsel, Col 1, lines 7-11; “The invention relates to vehicle occupancy detection generally and, more particularly, to a method and/or apparatus for implementing toll collection and carpool lane automation using in-vehicle computer vision and radar.”) teaches further comprising executing mapping of pixel data of the FOV to distance information from a reference position of a control device of the modular apparatus when the vehicle is in motion (Pertsel, Col 23, lines 22-34; “The processors 106a-106n may determine the width of the reference objects (e.g., the number of pixels in the video frame). The width of the current size of the reference object may be compared to the stored width of the reference object to estimate a distance of the occupants of the ego vehicle 50 from the lens 112a-112n. For example, a number of pixels may be measured between the reference object and the head of the driver 202 to determine location coordinates of the head of the driver 202.”).
The combination of Ferrari, SERGEEV, White, and Pertsel is considered to be analogous to the claimed invention because all of the references are in the same field of modular image-capturing apparatuses as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the control device of Ferrari with the teachings of Pertsel. One of ordinary skill in the art would have been motivated to make this modification so that details within the mapping can be clearly identified (Pertsel, Col 23, lines 35-50).
Regarding claim 18, Ferrari teaches (Original) The method according to claim 17 (Ferrari, at least one para. 0031; “As shown in FIG. 1, in one embodiment, an imaging device 104 may be coupled to one of the sides of the work vehicle 10 such that the imaging device 104 has a field of view 106 that allows it to capture images of an adjacent area or portion 108 of the field disposed along the side of the work vehicle 10. ”),
Ferrari does not explicitly teach further comprising updating the mapping of the pixel data of the FOV to the distance information based on a change in at least one of the plurality of buffer values.
However, Pertsel in the same field of endeavor (Pertsel, Col 1, lines 7-11; “The invention relates to vehicle occupancy detection generally and, more particularly, to a method and/or apparatus for implementing toll collection and carpool lane automation using in-vehicle computer vision and radar.”) teaches further comprising updating the mapping of the pixel data of the FOV to the distance information based on a change in at least one of the plurality of buffer values (Pertsel, Col 42, lines 34-49; “The dotted boxes 420a-420c and the dotted boxes 422a-422d may comprise the pixel data corresponding to an object detected by the computer vision operations pipeline 162 and/or the CNN module 150. The dotted boxes 420a-420c and the dotted boxes 422a-422d are shown for illustrative purposes. In an example, the dotted boxes 420a-420c and the dotted boxes 422a-422d may be a visual representation of the object detection (e.g., the dotted boxes 420a-420c and the dotted boxes 422a-422d may not appear on an output video frame displayed on one of the displays 118a-118n). In another example, the dotted boxes 420a-420c and the dotted boxes 422a-422d may be a bounding box generated by the processors 106a-106n displayed on the video frame to indicate that an object has been detected (e.g., the bounding boxes 420a-420c and the bounding boxes 422a-422d may be displayed in a debug mode of operation).”).
The combination of Ferrari, SERGEEV, White, and Pertsel is considered to be analogous to the claimed invention because all of them are in the same field of modular image-capturing apparatus as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the control device of Ferrari with the teaching of Pertsel. One of ordinary skill in the art would have been motivated to make this modification so that details within the mapping can be tracked (Pertsel, Col 23, lines 35-50).
Claims 10, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ferrari (US 20180210450 A1), White (US 20200302241 A1), and SERGEEV (US 20230137419 A1) as applied to claims 1 and 12 above, respectively, and further in view of KWAK (US 20220132828 A1).
Regarding claim 10, Ferrari teaches (Original) The modular apparatus according to claim 1, wherein the detection and tracking of the crop plant in the defined area from the captured sequence of images (Ferrari, at least one para. 0042; “Referring still to FIG. 3, in several embodiments, the instructions 116 stored within the memory 112 of the controller 102 may be executed by the processor(s) 110 to implement an image analysis module 126. In general, the image analysis module 126 may be configured to analyze the images received by the imaging device(s) 104 to allow the controller 102 to estimate one or more crop residue parameters associated with the field currently being tilled.”)
Ferrari does not explicitly teach that the detection and tracking is further based on a defined confidence threshold that is indicative of a detection sensitivity related to the crop plant.
However, KWAK in the same field of endeavor (KWAK, at least one para. 0002; “The present description generally relates to agricultural sprayers or other agricultural applicators that apply a substance to a field. More specifically, but not by limitation, the present description relates to visualization and control of an agricultural sprayer or other applicator machine.”) teaches that the detection and tracking is further based on a defined confidence threshold that is indicative of a detection sensitivity related to the crop plant (KWAK, at least one para. 0065; “Also, at block 412, a detection sensitivity can be identified for the selected camera 260. Again, the detection sensitivity can be a default setting, as represented at block 414, and/or user selected, as represented at block 416. The detection sensitivity controls operation of the imaging system in acquiring images and processing those images to determine the location of plants to be sprayed.”).
The combination of Ferrari, SERGEEV, White, and KWAK is considered to be analogous to the claimed invention because all of them are in the same field of modular image-capturing apparatus as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the detection and tracking of the crop plant of Ferrari with the teaching of KWAK. One of ordinary skill in the art would have been motivated to make this modification so that the plant size of the target plant can be detected in subsequent steps (KWAK, 0065).
Regarding claim 19, Ferrari teaches (Original) The method according to claim 12, wherein the detection and tracking of the crop plant in the defined area from the captured sequence of images (Ferrari, at least one para. 0042; “Referring still to FIG. 3, in several embodiments, the instructions 116 stored within the memory 112 of the controller 102 may be executed by the processor(s) 110 to implement an image analysis module 126. In general, the image analysis module 126 may be configured to analyze the images received by the imaging device(s) 104 to allow the controller 102 to estimate one or more crop residue parameters associated with the field currently being tilled.”)
Ferrari does not explicitly teach that the detection and tracking is further based on a defined confidence threshold that is indicative of a detection sensitivity related to the crop plant.
However, KWAK in the same field of endeavor (KWAK, at least one para. 0002; “The present description generally relates to agricultural sprayers or other agricultural applicators that apply a substance to a field. More specifically, but not by limitation, the present description relates to visualization and control of an agricultural sprayer or other applicator machine.”) teaches that the detection and tracking is further based on a defined confidence threshold that is indicative of a detection sensitivity related to the crop plant (KWAK, at least one para. 0065; “Also, at block 412, a detection sensitivity can be identified for the selected camera 260. Again, the detection sensitivity can be a default setting, as represented at block 414, and/or user selected, as represented at block 416. The detection sensitivity controls operation of the imaging system in acquiring images and processing those images to determine the location of plants to be sprayed.”).
The combination of Ferrari, SERGEEV, White, and KWAK is considered to be analogous to the claimed invention because all of them are in the same field of modular image-capturing apparatus as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the detection and tracking of the crop plant of Ferrari with the teaching of KWAK. One of ordinary skill in the art would have been motivated to make this modification so that the plant size of the target plant can be detected in subsequent steps (KWAK, 0065).
Regarding claim 20, KWAK teaches (Original) The method according to claim 19 (KWAK, at least one para. 0065; “Also, at block 412, a detection sensitivity can be identified for the selected camera 260. Again, the detection sensitivity can be a default setting, as represented at block 414, and/or user selected, as represented at block 416. The detection sensitivity controls operation of the imaging system in acquiring images and processing those images to determine the location of plants to be sprayed.”), further comprising changing the defined confidence threshold to cause a change in detection and tracking of the crop plant in the defined area from the captured sequence of images (KWAK, at least one para. 0065; “Also, block 412 can include changes to the functionality of image processing component 238 that processes the images acquired by the particular camera 260. Changes to the plant detection sensitivity is represented at block 420. For example, functionality of image processing component 238 can define the size of the target plants that will be detected through the image processing. For sake of illustration, increases to the camera and/or plant detection sensitivity can result in the application of the agricultural substance to more areas of the field, as more plants are detected. Conversely, decreases to the camera and/or plant detection sensitivity can result in the agricultural substance being applied to less areas of the field as less target plants are detected.”).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to UPUL P CHANDRASIRI whose telephone number is (703) 756-5823. The examiner can normally be reached M-F, 8:30 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace can be reached at 571-272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/U.P.C./ Examiner, Art Unit 3665 /CHRISTIAN CHACE/Supervisory Patent Examiner, Art Unit 3665